public inbox for gcc-patches@gcc.gnu.org
* [PATCH] Avoid some unnecessary set_cfun calls
@ 2013-11-13 10:52 Jakub Jelinek
  2013-11-13 11:17 ` Richard Biener
  2013-11-13 14:13 ` Martin Jambor
  0 siblings, 2 replies; 42+ messages in thread
From: Jakub Jelinek @ 2013-11-13 10:52 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches

Hi!

void f1 (void) {}
__attribute__((target ("avx"))) void f2 (void) {}
__attribute__((target ("avx2"))) void f3 (void) {}
__attribute__((target ("sse3"))) void f4 (void) {}
__attribute__((target ("ssse3"))) void f5 (void) {}
__attribute__((target ("sse4"))) void f6 (void) {}
takes about 3 seconds to compile at -O2, because set_cfun is terribly
expensive and there are hundreds of such calls.
The following patch is just a quick change to avoid some of them:
execute_function_todo starts with:
  unsigned int flags = (size_t)data;
  flags &= ~cfun->last_verified;
  if (!flags)
    return;
and if flags is initially zero, it does nothing.
Similarly, execute_function_dump has the whole body surrounded by
  if (dump_file && current_function_decl)
and thus if dump_file is NULL, there is nothing to do.
So IMHO in neither case (which happens pretty frequently) we need to
set_cfun to every function during IPA.

Also, I wonder if we couldn't defer the expensive ira_init, if the info
computed by it is used only during RTL optimization passes (haven't verified
it yet), then supposedly we could just remember using some target hook
what the last state was when we did ira_init last time, and call ira_init
again at the start of expansion or so if it is different from the last time.
For i?86/x86_64/ppc* this would be whether the current function's
DECL_FUNCTION_SPECIFIC_TARGET is the same as one for which ira_init has been
called, for rx whether interrupt attribute is the same and for mips whatever
is needed.
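To make the deferral idea concrete, here is a minimal, compilable sketch of the caching scheme described above. It is not GCC's actual API: `target_opts`, `maybe_ira_init` and the counter are illustrative stand-ins for DECL_FUNCTION_SPECIFIC_TARGET and the real (expensive) ira_init.

```c
#include <stddef.h>

/* Illustrative stand-in for a target-options node
   (e.g. DECL_FUNCTION_SPECIFIC_TARGET on i?86).  */
typedef struct target_opts { int isa_flags; } target_opts;

/* Remember which options ira_init was last run for.  */
static const target_opts *last_ira_init_opts = NULL;
static int ira_init_calls = 0;

/* Stand-in for the real, expensive initialization.  */
static void ira_init (void) { ira_init_calls++; }

/* Called once at the start of expansion, instead of from every
   set_cfun: re-run ira_init only when the options changed.  */
static void maybe_ira_init (const target_opts *fn_opts)
{
  if (fn_opts != last_ira_init_opts)
    {
      ira_init ();
      last_ira_init_opts = fn_opts;
    }
}
```

With this, a translation unit full of functions sharing the same target attribute pays for ira_init once per distinct option set rather than once per function.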

2013-11-13  Jakub Jelinek  <jakub@redhat.com>

	* passes.c (execute_todo): Don't call do_per_function if
	flags are zero.
	(execute_one_ipa_transform_pass, execute_one_pass): Don't call
	execute_function_dump if dump_file is NULL.

--- gcc/passes.c.jj	2013-11-12 11:31:30.000000000 +0100
+++ gcc/passes.c	2013-11-12 18:52:40.590727542 +0100
@@ -1875,7 +1875,8 @@ execute_todo (unsigned int flags)
 
   statistics_fini_pass ();
 
-  do_per_function (execute_function_todo, (void *)(size_t) flags);
+  if (flags)
+    do_per_function (execute_function_todo, (void *)(size_t) flags);
 
   /* Always remove functions just as before inlining: IPA passes might be
      interested to see bodies of extern inline functions that are not inlined
@@ -2065,7 +2066,8 @@ execute_one_ipa_transform_pass (struct c
   if (profile_report && cfun && (cfun->curr_properties & PROP_cfg))
     check_profile_consistency (pass->static_pass_number, 1, true);
 
-  do_per_function (execute_function_dump, NULL);
+  if (dump_file)
+    do_per_function (execute_function_dump, NULL);
   pass_fini_dump_file (pass);
 
   current_pass = NULL;
@@ -2231,7 +2233,8 @@ execute_one_pass (struct opt_pass *pass)
     check_profile_consistency (pass->static_pass_number, 1, true);
 
   verify_interpass_invariants ();
-  do_per_function (execute_function_dump, NULL);
+  if (dump_file)
+    do_per_function (execute_function_dump, NULL);
   if (pass->type == IPA_PASS)
     {
       struct cgraph_node *node;

	Jakub

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 10:52 [PATCH] Avoid some unnecessary set_cfun calls Jakub Jelinek
@ 2013-11-13 11:17 ` Richard Biener
  2013-11-13 11:27   ` Jakub Jelinek
  2013-11-13 14:13 ` Martin Jambor
  1 sibling, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-11-13 11:17 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: gcc-patches

On Wed, 13 Nov 2013, Jakub Jelinek wrote:

> Hi!
> 
> void f1 (void) {}
> __attribute__((target ("avx"))) void f2 (void) {}
> __attribute__((target ("avx2"))) void f3 (void) {}
> __attribute__((target ("sse3"))) void f4 (void) {}
> __attribute__((target ("ssse3"))) void f5 (void) {}
> __attribute__((target ("sse4"))) void f6 (void) {}
> takes about 3 seconds to compile at -O2, because set_cfun is terribly
> expensive and there are hundreds of such calls.
> The following patch is just a quick change to avoid some of them:
> execute_function_todo starts with:
>   unsigned int flags = (size_t)data;
>   flags &= ~cfun->last_verified;
>   if (!flags)
>     return;
> and if flags is initially zero, it does nothing.
> Similarly, execute_function_dump has the whole body surrounded by
>   if (dump_file && current_function_decl)
> and thus if dump_file is NULL, there is nothing to do.
> So IMHO in neither case (which happens pretty frequently) we need to
> set_cfun to every function during IPA.

Ok, but eventually all the TODO-called stuff should be made to work
with a NULL cfun (and execute () should get a struct function argument).

> Also, I wonder if we couldn't defer the expensive ira_init, if the info
> computed by it is used only during RTL optimization passes (haven't verified
> it yet), then supposedly we could just remember using some target hook
> what the last state was when we did ira_init last time, and call ira_init
> again at the start of expansion or so if it is different from the last time.
> For i?86/x86_64/ppc* this would be whether the current function's
> DECL_FUNCTION_SPECIFIC_TARGET is the same as one for which ira_init has been
> called, for rx whether interrupt attribute is the same and for mips whatever
> is needed.

I wonder why we cannot move all the stuff we re-init to a member
of struct function (or rather have a pointer to that info there
to cache it across functions with the same options).  That is,
get rid of more global state?  That would make switching back
and forth cheaper.

Thanks,
Richard.

> 2013-11-13  Jakub Jelinek  <jakub@redhat.com>
> 
> 	* passes.c (execute_todo): Don't call do_per_function if
> 	flags are zero.
> 	(execute_one_ipa_transform_pass, execute_one_pass): Don't call
> 	execute_function_dump if dump_file is NULL.
> 
> --- gcc/passes.c.jj	2013-11-12 11:31:30.000000000 +0100
> +++ gcc/passes.c	2013-11-12 18:52:40.590727542 +0100
> @@ -1875,7 +1875,8 @@ execute_todo (unsigned int flags)
>  
>    statistics_fini_pass ();
>  
> -  do_per_function (execute_function_todo, (void *)(size_t) flags);
> +  if (flags)
> +    do_per_function (execute_function_todo, (void *)(size_t) flags);
>  
>    /* Always remove functions just as before inlining: IPA passes might be
>       interested to see bodies of extern inline functions that are not inlined
> @@ -2065,7 +2066,8 @@ execute_one_ipa_transform_pass (struct c
>    if (profile_report && cfun && (cfun->curr_properties & PROP_cfg))
>      check_profile_consistency (pass->static_pass_number, 1, true);
>  
> -  do_per_function (execute_function_dump, NULL);
> +  if (dump_file)
> +    do_per_function (execute_function_dump, NULL);
>    pass_fini_dump_file (pass);
>  
>    current_pass = NULL;
> @@ -2231,7 +2233,8 @@ execute_one_pass (struct opt_pass *pass)
>      check_profile_consistency (pass->static_pass_number, 1, true);
>  
>    verify_interpass_invariants ();
> -  do_per_function (execute_function_dump, NULL);
> +  if (dump_file)
> +    do_per_function (execute_function_dump, NULL);
>    if (pass->type == IPA_PASS)
>      {
>        struct cgraph_node *node;
> 
> 	Jakub
> 
> 

-- 
Richard Biener <rguenther@suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 11:17 ` Richard Biener
@ 2013-11-13 11:27   ` Jakub Jelinek
  2013-11-13 11:38     ` Richard Biener
  2013-11-16 12:58     ` Richard Sandiford
  0 siblings, 2 replies; 42+ messages in thread
From: Jakub Jelinek @ 2013-11-13 11:27 UTC (permalink / raw)
  To: Richard Biener, Richard Sandiford, Michael Meissner; +Cc: gcc-patches

On Wed, Nov 13, 2013 at 11:27:10AM +0100, Richard Biener wrote:
> > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > computed by it is used only during RTL optimization passes (haven't verified
> > it yet), then supposedly we could just remember using some target hook
> > what the last state was when we did ira_init last time, and call ira_init
> > again at the start of expansion or so if it is different from the last time.
> > For i?86/x86_64/ppc* this would be whether the current function's
> > DECL_FUNCTION_SPECIFIC_TARGET is the same as one for which ira_init has been
> > called, for rx whether interrupt attribute is the same and for mips whatever
> > is needed.
> 
> I wonder why we cannot move all the stuff we re-init to a member
> of struct function (or rather have a pointer to that info there
> to cache it across functions with the same options).  That is,
> get rid of more global state?  That would make switching back
> and forth cheaper.

Isn't that what the SWITCHABLE_TARGET stuff is all about?
So, perhaps we should just define SWITCHABLE_TARGET on i?86/x86_64/powerpc*
(and rx if maintainer cares) and tweak it to attach somehow
struct target_globals * to TARGET_OPTION_NODE somehow.
A problem might be that lots of the save_target_globals
allocated structures are heap allocated rather than GC, so we might leak
memory.  Wonder if save_target_globals couldn't just compute the
aggregate size of all the structures it allocates with XCNEW right now
(plus required alignment if needed) and just allocate them together
with the ggc_alloc_target_globals after the target_globals structure
itself.
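The single-block allocation being suggested can be sketched as follows. This is a simplified mock, not the real save_target_globals: the structures are trimmed to a couple of fields, and plain calloc stands in for the GC allocator, but the layout computation (aligned offsets summed into one size, sub-structures carved out of the one block) is the scheme described above.

```c
#include <stdint.h>
#include <stdlib.h>

/* Reduced stand-ins for two of the many per-target structures.  */
struct target_regs { int    reg_data[8];  };
struct target_ira  { double cost_data[4]; };

struct target_globals {
  struct target_regs *regs;
  struct target_ira  *ira;
};

static size_t
align_up (size_t n, size_t a)
{
  return (n + a - 1) & ~(a - 1);
}

/* Compute the aggregate size of all sub-structures (with alignment)
   and allocate them in one block right after target_globals itself,
   so everything is freed/collected together.  calloc stands in for
   ggc_alloc_target_globals here.  */
static struct target_globals *
save_target_globals (void)
{
  size_t off_regs = align_up (sizeof (struct target_globals),
                              _Alignof (struct target_regs));
  size_t off_ira  = align_up (off_regs + sizeof (struct target_regs),
                              _Alignof (struct target_ira));
  size_t total    = off_ira + sizeof (struct target_ira);

  char *blob = calloc (1, total);
  struct target_globals *g = (struct target_globals *) blob;
  g->regs = (struct target_regs *) (blob + off_regs);
  g->ira  = (struct target_ira *)  (blob + off_ira);
  return g;
}
```

Since the sub-structures live inside the GC-managed block, nothing leaks when the target_globals node itself is collected, which addresses the heap-vs-GC mismatch raised above.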

	Jakub


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 11:27   ` Jakub Jelinek
@ 2013-11-13 11:38     ` Richard Biener
  2013-11-13 11:45       ` Jakub Jelinek
  2013-11-16 12:58     ` Richard Sandiford
  1 sibling, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-11-13 11:38 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Sandiford, Michael Meissner, gcc-patches

On Wed, 13 Nov 2013, Jakub Jelinek wrote:

> On Wed, Nov 13, 2013 at 11:27:10AM +0100, Richard Biener wrote:
> > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > computed by it is used only during RTL optimization passes (haven't verified
> > > it yet), then supposedly we could just remember using some target hook
> > > what the last state was when we did ira_init last time, and call ira_init
> > > again at the start of expansion or so if it is different from the last time.
> > > For i?86/x86_64/ppc* this would be whether the current function's
> > > DECL_FUNCTION_SPECIFIC_TARGET is the same as one for which ira_init has been
> > > called, for rx whether interrupt attribute is the same and for mips whatever
> > > is needed.
> > 
> > I wonder why we cannot move all the stuff we re-init to a member
> > of struct function (or rather have a pointer to that info there
> > to cache it across functions with the same options).  That is,
> > get rid of more global state?  That would make switching back
> > and forth cheaper.
> 
> Isn't that what the SWITCHABLE_TARGET stuff is all about?

Maybe - I didn't follow all the changes in this area.

> So, perhaps we should just define SWITCHABLE_TARGET on i?86/x86_64/powerpc*
> (and rx if maintainer cares) and tweak it to attach somehow
> struct target_globals * to TARGET_OPTION_NODE somehow.
> A problem might be that lots of the save_target_globals
> allocated structures are heap allocated rather than GC, so we might leak
> memory.  Wonder if save_target_globals couldn't just compute the
> aggregate size of all the structures it allocates with XCNEW right now
> (plus required alignment if needed) and just allocate them together
> with the ggc_alloc_target_globals after the target_globals structure
> itself.

If you want to re-use it from functions with same options don't you
have a hashtable anyway?  You could add a reference count.

Richard.


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 11:38     ` Richard Biener
@ 2013-11-13 11:45       ` Jakub Jelinek
  2013-11-13 11:51         ` Richard Biener
  0 siblings, 1 reply; 42+ messages in thread
From: Jakub Jelinek @ 2013-11-13 11:45 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Sandiford, Michael Meissner, gcc-patches

On Wed, Nov 13, 2013 at 11:53:32AM +0100, Richard Biener wrote:
> > So, perhaps we should just define SWITCHABLE_TARGET on i?86/x86_64/powerpc*
> > (and rx if maintainer cares) and tweak it to attach somehow
> > struct target_globals * to TARGET_OPTION_NODE somehow.
> > A problem might be that lots of the save_target_globals
> > allocated structures are heap allocated rather than GC, so we might leak
> > memory.  Wonder if save_target_globals couldn't just compute the
> > aggregate size of all the structures it allocates with XCNEW right now
> > (plus required alignment if needed) and just allocate them together
> > with the ggc_alloc_target_globals after the target_globals structure
> > itself.
> 
> If you want to re-use it from functions with same options don't you
> have a hashtable anyway?  You could add a reference count.

build_target_option_node is such a hash table for that.

	Jakub


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 11:45       ` Jakub Jelinek
@ 2013-11-13 11:51         ` Richard Biener
  0 siblings, 0 replies; 42+ messages in thread
From: Richard Biener @ 2013-11-13 11:51 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Sandiford, Michael Meissner, gcc-patches

On Wed, 13 Nov 2013, Jakub Jelinek wrote:

> On Wed, Nov 13, 2013 at 11:53:32AM +0100, Richard Biener wrote:
> > > So, perhaps we should just define SWITCHABLE_TARGET on i?86/x86_64/powerpc*
> > > (and rx if maintainer cares) and tweak it to attach somehow
> > > struct target_globals * to TARGET_OPTION_NODE somehow.
> > > A problem might be that lots of the save_target_globals
> > > allocated structures are heap allocated rather than GC, so we might leak
> > > memory.  Wonder if save_target_globals couldn't just compute the
> > > aggregate size of all the structures it allocates with XCNEW right now
> > > (plus required alignment if needed) and just allocate them together
> > > with the ggc_alloc_target_globals after the target_globals structure
> > > itself.
> > 
> > If you want to re-use it from functions with same options don't you
> > have a hashtable anyway?  You could add a reference count.
> 
> build_target_option_node is such a hash table for that.

Ah, and we already have some custom pointers in the tree node.  Looks
like a suitable place to put in memory management then.

Richard.


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 10:52 [PATCH] Avoid some unnecessary set_cfun calls Jakub Jelinek
  2013-11-13 11:17 ` Richard Biener
@ 2013-11-13 14:13 ` Martin Jambor
  2013-11-13 14:20   ` Richard Biener
  1 sibling, 1 reply; 42+ messages in thread
From: Martin Jambor @ 2013-11-13 14:13 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Biener, gcc-patches

Hi,

On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> Hi!
> 
> void f1 (void) {}
> __attribute__((target ("avx"))) void f2 (void) {}
> __attribute__((target ("avx2"))) void f3 (void) {}
> __attribute__((target ("sse3"))) void f4 (void) {}
> __attribute__((target ("ssse3"))) void f5 (void) {}
> __attribute__((target ("sse4"))) void f6 (void) {}
> takes about 3 seconds to compile at -O2, because set_cfun is terribly
> expensive and there are hundreds of such calls.
> The following patch is just a quick change to avoid some of them:
> execute_function_todo starts with:
>   unsigned int flags = (size_t)data;
>   flags &= ~cfun->last_verified;
>   if (!flags)
>     return;
> and if flags is initially zero, it does nothing.
> Similarly, execute_function_dump has the whole body surrounded by
>   if (dump_file && current_function_decl)
> and thus if dump_file is NULL, there is nothing to do.
> So IMHO in neither case (which happens pretty frequently) we need to
> set_cfun to every function during IPA.
> 
> Also, I wonder if we couldn't defer the expensive ira_init, if the info
> computed by it is used only during RTL optimization passes (haven't verified
> it yet), then supposedly we could just remember using some target hook
> what the last state was when we did ira_init last time, and call ira_init
> again at the start of expansion or so if it is different from the
> last time.

I was wondering whether the expensive parts of set_cfun could only be
run in pass_all_optimizations (and the -Og equivalent) but not when
changing functions in early and IPA passes.

Martin


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 14:13 ` Martin Jambor
@ 2013-11-13 14:20   ` Richard Biener
  2013-11-13 14:40     ` Martin Jambor
  2013-11-13 14:46     ` David Malcolm
  0 siblings, 2 replies; 42+ messages in thread
From: Richard Biener @ 2013-11-13 14:20 UTC (permalink / raw)
  To: Martin Jambor; +Cc: Jakub Jelinek, gcc-patches

On Wed, 13 Nov 2013, Martin Jambor wrote:

> Hi,
> 
> On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > Hi!
> > 
> > void f1 (void) {}
> > __attribute__((target ("avx"))) void f2 (void) {}
> > __attribute__((target ("avx2"))) void f3 (void) {}
> > __attribute__((target ("sse3"))) void f4 (void) {}
> > __attribute__((target ("ssse3"))) void f5 (void) {}
> > __attribute__((target ("sse4"))) void f6 (void) {}
> > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > expensive and there are hundreds of such calls.
> > The following patch is just a quick change to avoid some of them:
> > execute_function_todo starts with:
> >   unsigned int flags = (size_t)data;
> >   flags &= ~cfun->last_verified;
> >   if (!flags)
> >     return;
> > and if flags is initially zero, it does nothing.
> > Similarly, execute_function_dump has the whole body surrounded by
> >   if (dump_file && current_function_decl)
> > and thus if dump_file is NULL, there is nothing to do.
> > So IMHO in neither case (which happens pretty frequently) we need to
> > set_cfun to every function during IPA.
> > 
> > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > computed by it is used only during RTL optimization passes (haven't verified
> > it yet), then supposedly we could just remember using some target hook
> > what the last state was when we did ira_init last time, and call ira_init
> > again at the start of expansion or so if it is different from the
> > last time.
> 
> I was wondering whether the expensive parts of set_cfun could only be
> run in pass_all_optimizations (and the -Og equivalent) but not when
> changing functions in early and IPA passes.

Sounds like a hack ;)

Better get things working without the cfun/current_function_decl globals.
Wasn't there someone replacing all implicit uses with explicit ones
for stuff like n_basic_blocks?

Richard.


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 14:20   ` Richard Biener
@ 2013-11-13 14:40     ` Martin Jambor
  2013-11-13 14:46     ` David Malcolm
  1 sibling, 0 replies; 42+ messages in thread
From: Martin Jambor @ 2013-11-13 14:40 UTC (permalink / raw)
  To: Richard Biener; +Cc: Jakub Jelinek, gcc-patches

On Wed, Nov 13, 2013 at 01:53:00PM +0100, Richard Biener wrote:
> On Wed, 13 Nov 2013, Martin Jambor wrote:
> 
> > Hi,
> > 
> > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > Hi!
> > > 
> > > void f1 (void) {}
> > > __attribute__((target ("avx"))) void f2 (void) {}
> > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > expensive and there are hundreds of such calls.
> > > The following patch is just a quick change to avoid some of them:
> > > execute_function_todo starts with:
> > >   unsigned int flags = (size_t)data;
> > >   flags &= ~cfun->last_verified;
> > >   if (!flags)
> > >     return;
> > > and if flags is initially zero, it does nothing.
> > > Similarly, execute_function_dump has the whole body surrounded by
> > >   if (dump_file && current_function_decl)
> > > and thus if dump_file is NULL, there is nothing to do.
> > > So IMHO in neither case (which happens pretty frequently) we need to
> > > set_cfun to every function during IPA.
> > > 
> > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > computed by it is used only during RTL optimization passes (haven't verified
> > > it yet), then supposedly we could just remember using some target hook
> > > what the last state was when we did ira_init last time, and call ira_init
> > > again at the start of expansion or so if it is different from the
> > > last time.
> > 
> > I was wondering whether the expensive parts of set_cfun could only be
> > run in pass_all_optimizations (and the -Og equivalent) but not when
> > changing functions in early and IPA passes.
> 
> Sounds like a hack ;)

Well, a little bit.

> 
> Better get things working without the cfun/current_function_decl globals.
> Wasn't there someone replacing all implicit uses with explicit ones
> for stuff like n_basic_blocks?

I'm not so sure, I think that having an implicit value for the
function parameter makes sense in all intraprocedural passes.  But it
would be great if it was no more than an implicit parameter.

One item on my TODO list is to try and make the at least the
summary-building stages of IPA passes not depend on cfun.  That should
be easy if they did not modify the function bodies.  But PR 54477
shows that they do and the bug has so far scared me away.

Martin


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 14:20   ` Richard Biener
  2013-11-13 14:40     ` Martin Jambor
@ 2013-11-13 14:46     ` David Malcolm
  2013-11-13 15:22       ` Richard Biener
  1 sibling, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-11-13 14:46 UTC (permalink / raw)
  To: Richard Biener; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> On Wed, 13 Nov 2013, Martin Jambor wrote:
> 
> > Hi,
> > 
> > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > Hi!
> > > 
> > > void f1 (void) {}
> > > __attribute__((target ("avx"))) void f2 (void) {}
> > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > expensive and there are hundreds of such calls.
> > > The following patch is just a quick change to avoid some of them:
> > > execute_function_todo starts with:
> > >   unsigned int flags = (size_t)data;
> > >   flags &= ~cfun->last_verified;
> > >   if (!flags)
> > >     return;
> > > and if flags is initially zero, it does nothing.
> > > Similarly, execute_function_dump has the whole body surrounded by
> > >   if (dump_file && current_function_decl)
> > > and thus if dump_file is NULL, there is nothing to do.
> > > So IMHO in neither case (which happens pretty frequently) we need to
> > > set_cfun to every function during IPA.
> > > 
> > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > computed by it is used only during RTL optimization passes (haven't verified
> > > it yet), then supposedly we could just remember using some target hook
> > > what the last state was when we did ira_init last time, and call ira_init
> > > again at the start of expansion or so if it is different from the
> > > last time.
> > 
> > I was wondering whether the expensive parts of set_cfun could only be
> > run in pass_all_optimizations (and the -Og equivalent) but not when
> > changing functions in early and IPA passes.
> 
> Sounds like a hack ;)
> 
> Better get things working without the cfun/current_function_decl globals.
> Wasn't there someone replacing all implicit uses with explicit ones
> for stuff like n_basic_blocks?

I was working on this:
http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
though I switched to other tasks I felt were higher priority; sorry.

Do you still want me to go ahead and commit the series of changes you
pre-approved there?

i.e. the "n_basic_blocks" macro goes away in favor of:
   n_basic_blocks_for_fn (cfun)
as a renaming of the existing n_basic_blocks_for_function macro,
followed up by analogous changes to the other macros.

Or should I repost before committing?

Dave


* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 14:46     ` David Malcolm
@ 2013-11-13 15:22       ` Richard Biener
  2013-11-16 10:49         ` [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls) David Malcolm
  0 siblings, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-11-13 15:22 UTC (permalink / raw)
  To: David Malcolm; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

On Wed, 13 Nov 2013, David Malcolm wrote:

> On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > 
> > > Hi,
> > > 
> > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > Hi!
> > > > 
> > > > void f1 (void) {}
> > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > expensive and there are hundreds of such calls.
> > > > The following patch is just a quick change to avoid some of them:
> > > > execute_function_todo starts with:
> > > >   unsigned int flags = (size_t)data;
> > > >   flags &= ~cfun->last_verified;
> > > >   if (!flags)
> > > >     return;
> > > > and if flags is initially zero, it does nothing.
> > > > Similarly, execute_function_dump has the whole body surrounded by
> > > >   if (dump_file && current_function_decl)
> > > > and thus if dump_file is NULL, there is nothing to do.
> > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > set_cfun to every function during IPA.
> > > > 
> > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > it yet), then supposedly we could just remember using some target hook
> > > > what the last state was when we did ira_init last time, and call ira_init
> > > > again at the start of expansion or so if it is different from the
> > > > last time.
> > > 
> > > I was wondering whether the expensive parts of set_cfun could only be
> > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > changing functions in early and IPA passes.
> > 
> > Sounds like a hack ;)
> > 
> > Better get things working without the cfun/current_function_decl globals.
> > Wasn't there someone replacing all implicit uses with explicit ones
> > for stuff like n_basic_blocks?
> 
> I was working on this:
> http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> though I switched to other tasks I felt were higher priority; sorry.
> 
> Do you still want me to go ahead and commit the series of changes you
> pre-approved there?
> 
> i.e. the "n_basic_blocks" macro goes away in favor of:
>    n_basic_blocks_for_fn (cfun)
> as a renaming of the existing n_basic_blocks_for_function macro,
> followed up by analogous changes to the other macros.
> 
> Or should I repost before committing?

I'd say create the n_basic_blocks patch and post it, that gives
people a chance to object.  If nobody chimes in I approve it
and pre-approve the rest ;)

Using n_basic_blocks_for_fn (cfun) might feel backwards if
eventually we'd want to C++-ify struct function and make
n_basic_blocks a member function which would make it
cfun->n_basic_blocks () instead.  Ok, I think that will get
us into C++ bikeshedding again ;)

Thanks,
Richard.


* [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls)
  2013-11-13 15:22       ` Richard Biener
@ 2013-11-16 10:49         ` David Malcolm
  2013-11-19  5:27           ` David Malcolm
  0 siblings, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-11-16 10:49 UTC (permalink / raw)
  To: Richard Biener; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 3880 bytes --]

On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> On Wed, 13 Nov 2013, David Malcolm wrote:
> 
> > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > 
> > > > Hi,
> > > > 
> > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > Hi!
> > > > > 
> > > > > void f1 (void) {}
> > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > expensive and there are hundreds of such calls.
> > > > > The following patch is just a quick change to avoid some of them:
> > > > > execute_function_todo starts with:
> > > > >   unsigned int flags = (size_t)data;
> > > > >   flags &= ~cfun->last_verified;
> > > > >   if (!flags)
> > > > >     return;
> > > > > and if flags is initially zero, it does nothing.
> > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > >   if (dump_file && current_function_decl)
> > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > > set_cfun to every function during IPA.
> > > > > 
> > > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > > it yet), then supposedly we could just remember using some target hook
> > > > > what the last state was when we did ira_init last time, and call ira_init
> > > > > again at the start of expansion or so if it is different from the
> > > > > last time.
> > > > 
> > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > changing functions in early and IPA passes.
> > > 
> > > Sounds like a hack ;)
> > > 
> > > Better get things working without the cfun/current_function_decl globals.
> > > Wasn't there someone replacing all implicit uses with explicit ones
> > > for stuff like n_basic_blocks?
> > 
> > I was working on this:
> > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > though I switched to other tasks I felt were higher priority; sorry.
> > 
> > Do you still want me to go ahead and commit the series of changes you
> > pre-approved there?
> > 
> > i.e. the "n_basic_blocks" macro goes away in favor of:
> >    n_basic_blocks_for_fn (cfun)
> > as a renaming of the existing n_basic_blocks_for_function macro,
> > followed up by analogous changes to the other macros.
> > 
> > Or should I repost before committing?
> 
> I'd say create the n_basic_blocks patch and post it; that gives
> people a chance to object.  If nobody chimes in I approve it
> and pre-approve the rest ;)
> 
> Using n_basic_blocks_for_fn (cfun) might feel backwards if
> eventually we'd want to C++-ify struct function and make
> n_basic_blocks a member function which would make it
> cfun->n_basic_blocks () instead.  Ok, I think that will get
> us into C++ bikeshedding again ;)

[I can't face another C vs C++ discussion right now :)]

Thanks.  Attached is such a patch, eliminating the:
  n_basic_blocks
macro in favor of
  n_basic_blocks_for_fn (cfun)

Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
compiled stage1 on spu-unknown-elf and s390-linux-gnu (since the patch
touches those config files).

Given the conditional pre-approval above, I'm posting here to give
people a chance to object - otherwise I'll commit, and follow up with
the other macros that implicitly use cfun as per the thread linked to
above.



[-- Attachment #2: eliminate-n_basic_blocks.patch --]
[-- Type: text/x-patch, Size: 65887 bytes --]

commit a53ca61da66612f7daba8e5e0a26faeaa699507f
Author: David Malcolm <dmalcolm@redhat.com>
Date:   Fri Nov 1 13:29:39 2013 -0400

    Eliminate n_basic_blocks macro
    
    gcc/
    	* basic-block.h (n_basic_blocks_for_function): Rename macro to...
    	(n_basic_blocks_for_fn): ...this.
    
    	(n_basic_blocks): Eliminate macro as work towards making uses of
    	cfun be explicit.
    
    	* cfgloop.c (init_loops_structure): Update for renaming of
    	"n_basic_blocks_for_function" to "n_basic_blocks_for_fn".
    	* graph.c (draw_cfg_nodes_no_loops): Likewise.
    	* ipa-utils.c (ipa_merge_profiles): Likewise.
    	* lto-streamer-in.c (make_new_block): Likewise.
    	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
    	(dump_function_to_file): Likewise.
    
    	* alias.c (init_alias_analysis): Replace usage of "n_basic_blocks"
    	macro with "n_basic_blocks_for_fn (cfun)".
    	* bb-reorder.c (partition_hot_cold_basic_blocks): Likewise.
    	(duplicate_computed_gotos): Likewise.
    	(reorder_basic_blocks): Likewise.
    	* bt-load.c (augment_live_range): Likewise.
    	* cfg.c (expunge_block): Likewise.
    	(compact_blocks): Likewise.
    	* cfganal.c (single_pred_before_succ_order): Likewise.
    	(compute_idf): Likewise.
    	(flow_dfs_compute_reverse_init): Likewise.
    	(pre_and_rev_post_order_compute): Likewise.
    	(pre_and_rev_post_order_compute_fn): Likewise.
    	(inverted_post_order_compute): Likewise.
    	(post_order_compute): Likewise.
    	(print_edge_list): Likewise.
    	(find_unreachable_blocks): Likewise.
    	(mark_dfs_back_edges): Likewise.
    	* cfgcleanup.c (try_optimize_cfg): Likewise.
    	(try_forward_edges): Likewise.
    	* cfghooks.c (dump_flow_info): Likewise.
    	* cfgloop.c (verify_loop_structure): Likewise.
    	(get_loop_body): Likewise.
    	(flow_loops_find): Likewise.
    	* cfgloopmanip.c (add_loop): Likewise.
    	(remove_path): Likewise.
    	(find_path): Likewise.
    	* cfgrtl.c (rtl_flow_call_edges_add): Likewise.
    	(rtl_verify_bb_layout): Likewise.
    	(entry_of_function): Likewise.
    	(rtl_create_basic_block): Likewise.
    	* coverage.c (coverage_compute_cfg_checksum): Likewise.
    	* cprop.c (one_cprop_pass): Likewise.
    	(is_too_expensive): Likewise.
    	* df-core.c (df_compute_cfg_image): Likewise.
    	(df_compact_blocks): Likewise.
    	(df_worklist_dataflow_doublequeue): Likewise.
    	* dominance.c (calculate_dominance_info): Likewise.
    	(calc_dfs_tree): Likewise.
    	(calc_dfs_tree_nonrec): Likewise.
    	(init_dom_info): Likewise.
    	* domwalk.c (cmp_bb_postorder): Likewise.
    	* function.c (thread_prologue_and_epilogue_insns): Likewise.
    	(generate_setjmp_warnings): Likewise.
    	* fwprop.c (build_single_def_use_links): Likewise.
    	* gcse.c (is_too_expensive): Likewise.
    	(one_code_hoisting_pass): Likewise.
    	(one_pre_gcse_pass): Likewise.
    	* graphite.c (graphite_initialize): Likewise.
    	* haifa-sched.c (haifa_sched_init): Likewise.
    	* ipa-inline-analysis.c (estimate_function_body_sizes): Likewise.
    	* ira-build.c (ira_build): Likewise.
    	* lcm.c (compute_nearerout): Likewise.
    	(compute_available): Likewise.
    	(compute_laterin): Likewise.
    	(compute_antinout_edge): Likewise.
    	* lra-lives.c (lra_create_live_ranges): Likewise.
    	* lra.c (has_nonexceptional_receiver): Likewise.
    	* mcf.c (create_fixup_graph): Likewise.
    	* profile.c (branch_prob): Likewise.
    	* reg-stack.c (convert_regs_2): Likewise.
    	* regrename.c (regrename_analyze): Likewise.
    	* reload1.c (has_nonexceptional_receiver): Likewise.
    	* reorg.c (dbr_schedule): Likewise.
    	* sched-deps.c (sched_deps_init): Likewise.
    	* sched-ebb.c (schedule_ebbs): Likewise.
    	* sched-rgn.c (extend_regions): Likewise.
    	(schedule_insns): Likewise.
    	(sched_rgn_init): Likewise.
    	(extend_rgns): Likewise.
    	(haifa_find_rgns): Likewise.
    	* sel-sched-ir.c (recompute_rev_top_order): Likewise.
    	(sel_recompute_toporder): Likewise.
    	* sel-sched.c (run_selective_scheduling): Likewise.
    	* store-motion.c (one_store_motion_pass): Likewise.
    	(remove_reachable_equiv_notes): Likewise.
    	* tracer.c (tracer): Likewise.
    	(tail_duplicate): Likewise.
    	* tree-cfg.c (gimple_flow_call_edges_add): Likewise.
    	(dump_cfg_stats): Likewise.
    	(gimple_dump_cfg): Likewise.
    	(create_bb): Likewise.
    	(build_gimple_cfg): Likewise.
    	* tree-cfgcleanup.c (merge_phi_nodes): Likewise.
    	* tree-inline.c (optimize_inline_calls): Likewise.
    	(fold_marked_statements): Likewise.
    	* tree-ssa-ifcombine.c (tree_ssa_ifcombine): Likewise.
    	* tree-ssa-loop-ch.c (copy_loop_headers): Likewise.
    	* tree-ssa-loop-im.c (analyze_memory_references): Likewise.
    	* tree-ssa-loop-manip.c (compute_live_loop_exits): Likewise.
    	* tree-ssa-math-opts.c (execute_cse_reciprocals): Likewise.
    	* tree-ssa-phiopt.c (tree_ssa_phiopt_worker): Likewise.
    	* tree-ssa-pre.c (do_pre): Likewise.
    	(init_pre): Likewise.
    	(compute_avail): Likewise.
    	* tree-ssa-reassoc.c (init_reassoc): Likewise.
    	* tree-ssa-sccvn.c (init_scc_vn): Likewise.
    	* tree-ssa-tail-merge.c (alloc_cluster_vectors): Likewise.
    	(init_worklist): Likewise.
    	* tree-ssa-uncprop.c (associate_equivalences_with_edges): Likewise.
    	* var-tracking.c (variable_tracking_main_1): Likewise.
    	(vt_find_locations): Likewise.
    	(vt_stack_adjustments): Likewise.
    	* config/s390/s390.c (s390_optimize_nonescaping_tx): Likewise.
    	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.

diff --git a/gcc/alias.c b/gcc/alias.c
index 1736169..0cf5655 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2952,7 +2952,7 @@ init_alias_analysis (void)
      The state of the arrays for the set chain in question does not matter
      since the program has undefined behavior.  */
 
-  rpo = XNEWVEC (int, n_basic_blocks);
+  rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   rpo_cnt = pre_and_rev_post_order_compute (NULL, rpo, false);
 
   pass = 0;
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 9c28f14..74384b2 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -315,7 +315,7 @@ struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR_FOR_FUNCTION(FN)     ((FN)->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR_FOR_FUNCTION(FN)	     ((FN)->cfg->x_exit_block_ptr)
 #define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
-#define n_basic_blocks_for_function(FN)	     ((FN)->cfg->x_n_basic_blocks)
+#define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_function(FN)	     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
@@ -330,7 +330,6 @@ struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR		(cfun->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR		(cfun->cfg->x_exit_block_ptr)
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
-#define n_basic_blocks		(cfun->cfg->x_n_basic_blocks)
 #define n_edges			(cfun->cfg->x_n_edges)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 8e2348f..45bf128 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -2220,7 +2220,7 @@ reorder_basic_blocks (void)
 
   gcc_assert (current_ir_type () == IR_RTL_CFGLAYOUT);
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return;
 
   set_edge_can_fallthru_flag ();
@@ -2244,7 +2244,7 @@ reorder_basic_blocks (void)
       bbd[i].node = NULL;
     }
 
-  traces = XNEWVEC (struct trace, n_basic_blocks);
+  traces = XNEWVEC (struct trace, n_basic_blocks_for_fn (cfun));
   n_traces = 0;
   find_traces (&n_traces, traces);
   connect_traces (n_traces, traces);
@@ -2388,7 +2388,7 @@ duplicate_computed_gotos (void)
   bitmap candidates;
   int max_size;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   clear_bb_flags ();
@@ -2640,7 +2640,7 @@ partition_hot_cold_basic_blocks (void)
 {
   vec<edge> crossing_edges;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   df_set_flags (DF_DEFER_INSN_RESCAN);
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index 5384d01..348e40b 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -900,7 +900,7 @@ augment_live_range (bitmap live_range, HARD_REG_SET *btrs_live_in_range,
 {
   basic_block *worklist, *tos;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   if (dominated_by_p (CDI_DOMINATORS, new_bb, head_bb))
     {
diff --git a/gcc/cfg.c b/gcc/cfg.c
index cfada73..10791a7 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -169,12 +169,12 @@ compact_blocks (void)
 	  bb->index = i;
 	  i++;
 	}
-      gcc_assert (i == n_basic_blocks);
+      gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
       for (; i < last_basic_block; i++)
 	SET_BASIC_BLOCK (i, NULL);
     }
-  last_basic_block = n_basic_blocks;
+  last_basic_block = n_basic_blocks_for_fn (cfun);
 }
 
 /* Remove block B from the basic block array.  */
@@ -184,7 +184,7 @@ expunge_block (basic_block b)
 {
   unlink_block (b);
   SET_BASIC_BLOCK (b->index, NULL);
-  n_basic_blocks--;
+  n_basic_blocks_for_fn (cfun)--;
   /* We should be able to ggc_free here, but we are not.
      The dead SSA_NAMES are left pointing to dead statements that are pointing
      to dead basic blocks making garbage collector to die.
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index b221611..1c90f8c 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -76,7 +76,7 @@ mark_dfs_back_edges (void)
   post = XCNEWVEC (int, last_basic_block);
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -152,7 +152,7 @@ find_unreachable_blocks (void)
   edge_iterator ei;
   basic_block *tos, *worklist, bb;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Clear all the reachability flags.  */
 
@@ -256,7 +256,7 @@ print_edge_list (FILE *f, struct edge_list *elist)
   int x;
 
   fprintf (f, "Compressed edge list, %d BBs + entry & exit, and %d edges\n",
-	   n_basic_blocks, elist->num_edges);
+	   n_basic_blocks_for_fn (cfun), elist->num_edges);
 
   for (x = 0; x < elist->num_edges; x++)
     {
@@ -609,7 +609,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
     post_order[post_order_num++] = EXIT_BLOCK;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -667,7 +667,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
 
   /* Delete the unreachable blocks if some were found and we are
      supposed to do it.  */
-  if (delete_unreachable && (count != n_basic_blocks))
+  if (delete_unreachable && (count != n_basic_blocks_for_fn (cfun)))
     {
       basic_block b;
       basic_block next_bb;
@@ -762,7 +762,7 @@ inverted_post_order_compute (int *post_order)
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -898,11 +898,11 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
   edge_iterator *stack;
   int sp;
   int pre_order_num = 0;
-  int rev_post_order_num = n_basic_blocks - 1;
+  int rev_post_order_num = n_basic_blocks_for_fn (cfun) - 1;
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   if (include_entry_exit)
@@ -1000,11 +1000,12 @@ pre_and_rev_post_order_compute (int *pre_order, int *rev_post_order,
 					 include_entry_exit);
   if (include_entry_exit)
     /* The number of nodes visited should be the number of blocks.  */
-    gcc_assert (pre_order_num == n_basic_blocks);
+    gcc_assert (pre_order_num == n_basic_blocks_for_fn (cfun));
   else
     /* The number of nodes visited should be the number of blocks minus
        the entry and exit blocks which are not visited here.  */
-    gcc_assert (pre_order_num == n_basic_blocks - NUM_FIXED_BLOCKS);
+    gcc_assert (pre_order_num
+		== (n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS));
 
   return pre_order_num;
 }
@@ -1043,7 +1044,7 @@ static void
 flow_dfs_compute_reverse_init (depth_first_search_ds data)
 {
   /* Allocate stack for back-tracking up CFG.  */
-  data->stack = XNEWVEC (basic_block, n_basic_blocks);
+  data->stack = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   data->sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -1275,7 +1276,7 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs)
   bitmap phi_insertion_points;
 
   /* Each block can appear at most twice on the work-stack.  */
-  work_stack.create (2 * n_basic_blocks);
+  work_stack.create (2 * n_basic_blocks_for_fn (cfun));
   phi_insertion_points = BITMAP_ALLOC (NULL);
 
   /* Seed the work list with all the blocks in DEF_BLOCKS.  We use
@@ -1493,8 +1494,8 @@ basic_block *
 single_pred_before_succ_order (void)
 {
   basic_block x, y;
-  basic_block *order = XNEWVEC (basic_block, n_basic_blocks);
-  unsigned n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  basic_block *order = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  unsigned n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
   unsigned np, i;
   sbitmap visited = sbitmap_alloc (last_basic_block);
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 5161190..a2192cb 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -459,7 +459,7 @@ try_forward_edges (int mode, basic_block b)
 	  && find_reg_note (BB_END (first), REG_CROSSING_JUMP, NULL_RTX))
 	return changed;
 
-      while (counter < n_basic_blocks)
+      while (counter < n_basic_blocks_for_fn (cfun))
 	{
 	  basic_block new_target = NULL;
 	  bool new_target_threaded = false;
@@ -472,7 +472,7 @@ try_forward_edges (int mode, basic_block b)
 	      /* Bypass trivial infinite loops.  */
 	      new_target = single_succ (target);
 	      if (target == new_target)
-		counter = n_basic_blocks;
+		counter = n_basic_blocks_for_fn (cfun);
 	      else if (!optimize)
 		{
 		  /* When not optimizing, ensure that edges or forwarder
@@ -521,7 +521,8 @@ try_forward_edges (int mode, basic_block b)
 	      if (t)
 		{
 		  if (!threaded_edges)
-		    threaded_edges = XNEWVEC (edge, n_basic_blocks);
+		    threaded_edges = XNEWVEC (edge,
+					      n_basic_blocks_for_fn (cfun));
 		  else
 		    {
 		      int i;
@@ -533,7 +534,7 @@ try_forward_edges (int mode, basic_block b)
 			  break;
 		      if (i < nthreaded_edges)
 			{
-			  counter = n_basic_blocks;
+			  counter = n_basic_blocks_for_fn (cfun);
 			  break;
 			}
 		    }
@@ -542,7 +543,9 @@ try_forward_edges (int mode, basic_block b)
 		  if (t->dest == b)
 		    break;
 
-		  gcc_assert (nthreaded_edges < n_basic_blocks - NUM_FIXED_BLOCKS);
+		  gcc_assert (nthreaded_edges
+			      < (n_basic_blocks_for_fn (cfun)
+				 - NUM_FIXED_BLOCKS));
 		  threaded_edges[nthreaded_edges++] = t;
 
 		  new_target = t->dest;
@@ -558,7 +561,7 @@ try_forward_edges (int mode, basic_block b)
 	  threaded |= new_target_threaded;
 	}
 
-      if (counter >= n_basic_blocks)
+      if (counter >= n_basic_blocks_for_fn (cfun))
 	{
 	  if (dump_file)
 	    fprintf (dump_file, "Infinite loop in BB %i.\n",
@@ -2713,7 +2716,7 @@ try_optimize_cfg (int mode)
 		  /* Note that forwarder_block_p true ensures that
 		     there is a successor for this block.  */
 		  && (single_succ_edge (b)->flags & EDGE_FALLTHRU)
-		  && n_basic_blocks > NUM_FIXED_BLOCKS + 1)
+		  && n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS + 1)
 		{
 		  if (dump_file)
 		    fprintf (dump_file,
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index c12a62f..3016c54 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -323,7 +323,8 @@ dump_flow_info (FILE *file, int flags)
 {
   basic_block bb;
 
-  fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks, n_edges);
+  fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks_for_fn (cfun),
+	   n_edges);
   FOR_ALL_BB (bb)
     dump_bb (file, bb, 0, flags);
 
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 3ff8e84..d5abf89 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -351,7 +351,7 @@ init_loops_structure (struct function *fn,
 
   /* Dummy loop containing whole function.  */
   root = alloc_loop ();
-  root->num_nodes = n_basic_blocks_for_function (fn);
+  root->num_nodes = n_basic_blocks_for_fn (fn);
   root->latch = EXIT_BLOCK_PTR_FOR_FUNCTION (fn);
   root->header = ENTRY_BLOCK_PTR_FOR_FUNCTION (fn);
   ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)->loop_father = root;
@@ -421,21 +421,21 @@ flow_loops_find (struct loops *loops)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return loops;
 
   /* The root loop node contains all basic-blocks.  */
-  loops->tree_root->num_nodes = n_basic_blocks;
+  loops->tree_root->num_nodes = n_basic_blocks_for_fn (cfun);
 
   /* Compute depth first search order of the CFG so that outer
      natural loops will be found before inner natural loops.  */
-  rc_order = XNEWVEC (int, n_basic_blocks);
+  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   pre_and_rev_post_order_compute (NULL, rc_order, false);
 
   /* Gather all loop headers in reverse completion order and allocate
      loop structures for loops that are not already present.  */
   larray.create (loops->larray->length ());
-  for (b = 0; b < n_basic_blocks - NUM_FIXED_BLOCKS; b++)
+  for (b = 0; b < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; b++)
     {
       basic_block header = BASIC_BLOCK (rc_order[b]);
       if (bb_loop_header_p (header))
@@ -831,7 +831,7 @@ get_loop_body (const struct loop *loop)
     {
       /* There may be blocks unreachable from EXIT_BLOCK, hence we need to
 	 special-case the fake loop that contains the whole function.  */
-      gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks);
+      gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks_for_fn (cfun));
       body[tv++] = loop->header;
       body[tv++] = EXIT_BLOCK_PTR;
       FOR_EACH_BB (bb)
@@ -1367,7 +1367,7 @@ verify_loop_structure (void)
   /* Check the recorded loop father and sizes of loops.  */
   visited = sbitmap_alloc (last_basic_block);
   bitmap_clear (visited);
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   FOR_EACH_LOOP (li, loop, LI_FROM_INNERMOST)
     {
       unsigned n;
@@ -1379,7 +1379,7 @@ verify_loop_structure (void)
 	  continue;
 	}
 
-      n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+      n = get_loop_body_with_size (loop, bbs, n_basic_blocks_for_fn (cfun));
       if (loop->num_nodes != n)
 	{
 	  error ("size of loop %d should be %d, not %d",
diff --git a/gcc/cfgloopmanip.c b/gcc/cfgloopmanip.c
index be876db..3715e08 100644
--- a/gcc/cfgloopmanip.c
+++ b/gcc/cfgloopmanip.c
@@ -69,9 +69,9 @@ find_path (edge e, basic_block **bbs)
   gcc_assert (EDGE_COUNT (e->dest->preds) <= 1);
 
   /* Find bbs in the path.  */
-  *bbs = XNEWVEC (basic_block, n_basic_blocks);
+  *bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   return dfs_enumerate_from (e->dest, 0, rpe_enum_p, *bbs,
-			     n_basic_blocks, e->dest);
+			     n_basic_blocks_for_fn (cfun), e->dest);
 }
 
 /* Fix placement of basic block BB inside loop hierarchy --
@@ -341,7 +341,7 @@ remove_path (edge e)
   nrem = find_path (e, &rem_bbs);
 
   n_bord_bbs = 0;
-  bord_bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bord_bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   seen = sbitmap_alloc (last_basic_block);
   bitmap_clear (seen);
 
@@ -448,8 +448,8 @@ add_loop (struct loop *loop, struct loop *outer)
   flow_loop_tree_node_add (outer, loop);
 
   /* Find its nodes.  */
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  n = get_loop_body_with_size (loop, bbs, n_basic_blocks_for_fn (cfun));
 
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index d6733a2..c61f0fb 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -361,7 +361,7 @@ rtl_create_basic_block (void *headp, void *endp, basic_block after)
       vec_safe_grow_cleared (basic_block_info, new_size);
     }
 
-  n_basic_blocks++;
+  n_basic_blocks_for_fn (cfun)++;
 
   bb = create_basic_block_structure (head, end, NULL, after);
   bb->aux = NULL;
@@ -500,7 +500,7 @@ make_pass_free_cfg (gcc::context *ctxt)
 rtx
 entry_of_function (void)
 {
-  return (n_basic_blocks > NUM_FIXED_BLOCKS ?
+  return (n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS ?
 	  BB_HEAD (ENTRY_BLOCK_PTR->next_bb) : get_insns ());
 }
 
@@ -2917,10 +2917,10 @@ rtl_verify_bb_layout (void)
 	curr_bb = NULL;
     }
 
-  if (num_bb_notes != n_basic_blocks - NUM_FIXED_BLOCKS)
+  if (num_bb_notes != n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS)
     internal_error
       ("number of bb notes in insn chain (%d) != n_basic_blocks (%d)",
-       num_bb_notes, n_basic_blocks);
+       num_bb_notes, n_basic_blocks_for_fn (cfun));
 
    return err;
 }
@@ -4751,7 +4751,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
index f0d6a59..5a13412 100644
--- a/gcc/config/s390/s390.c
+++ b/gcc/config/s390/s390.c
@@ -7963,7 +7963,7 @@ s390_optimize_nonescaping_tx (void)
   if (!cfun->machine->tbegin_p)
     return;
 
-  for (bb_index = 0; bb_index < n_basic_blocks; bb_index++)
+  for (bb_index = 0; bb_index < n_basic_blocks_for_fn (cfun); bb_index++)
     {
       bb = BASIC_BLOCK (bb_index);
 
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index 38c441d..6f56e48 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2469,13 +2469,13 @@ spu_machine_dependent_reorg (void)
   compact_blocks ();
 
   spu_bb_info =
-    (struct spu_bb_info *) xcalloc (n_basic_blocks,
+    (struct spu_bb_info *) xcalloc (n_basic_blocks_for_fn (cfun),
 				    sizeof (struct spu_bb_info));
 
   /* We need exact insn addresses and lengths.  */
   shorten_branches (get_insns ());
 
-  for (i = n_basic_blocks - 1; i >= 0; i--)
+  for (i = n_basic_blocks_for_fn (cfun) - 1; i >= 0; i--)
     {
       bb = BASIC_BLOCK (i);
       branch = 0;
diff --git a/gcc/coverage.c b/gcc/coverage.c
index 9b0fc8b..ce87e3e 100644
--- a/gcc/coverage.c
+++ b/gcc/coverage.c
@@ -584,7 +584,7 @@ unsigned
 coverage_compute_cfg_checksum (void)
 {
   basic_block bb;
-  unsigned chksum = n_basic_blocks;
+  unsigned chksum = n_basic_blocks_for_fn (cfun);
 
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 358fca9..78cfeba 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -1729,24 +1729,25 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, n_basic_blocks_for_fn (cfun),
+	       n_edges / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
 
   /* If allocating memory for the cprop bitmap would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((n_basic_blocks_for_fn (cfun)
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, n_basic_blocks_for_fn (cfun), max_reg_num ());
 
       return true;
     }
@@ -1763,7 +1764,7 @@ one_cprop_pass (void)
   int changed = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_ ("const/copy propagation disabled")))
     return 0;
 
@@ -1873,7 +1874,8 @@ one_cprop_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "CPROP of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d local const props, %d local copy props, ",
 	       local_const_prop_count, local_copy_prop_count);
       fprintf (dump_file, "%d global const props, %d global copy props\n\n",
diff --git a/gcc/df-core.c b/gcc/df-core.c
index deea755..20d6c4e 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1097,8 +1097,8 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
     fprintf (dump_file, "df_worklist_dataflow_doublequeue:"
 	     "n_basic_blocks %d n_edges %d"
 	     " count %d (%5.2g)\n",
-	     n_basic_blocks, n_edges,
-	     dcount, dcount / (float)n_basic_blocks);
+	     n_basic_blocks_for_fn (cfun), n_edges,
+	     dcount, dcount / (float)n_basic_blocks_for_fn (cfun));
 }
 
 /* Worklist-based dataflow solver. It uses sbitmap as a worklist,
@@ -1606,7 +1606,7 @@ df_compact_blocks (void)
       i++;
     }
 
-  gcc_assert (i == n_basic_blocks);
+  gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
   for (; i < last_basic_block; i++)
     SET_BASIC_BLOCK (i, NULL);
@@ -1714,7 +1714,7 @@ static int *
 df_compute_cfg_image (void)
 {
   basic_block bb;
-  int size = 2 + (2 * n_basic_blocks);
+  int size = 2 + (2 * n_basic_blocks_for_fn (cfun));
   int i;
   int * map;
 
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 569f1f4..6530109 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -146,7 +146,7 @@ static void
 init_dom_info (struct dom_info *di, enum cdi_direction dir)
 {
   /* We need memory for n_basic_blocks nodes.  */
-  unsigned int num = n_basic_blocks;
+  unsigned int num = n_basic_blocks_for_fn (cfun);
   init_ar (di->dfs_parent, TBB, num, 0);
   init_ar (di->path_min, TBB, num, i);
   init_ar (di->key, TBB, num, i);
@@ -233,7 +233,7 @@ calc_dfs_tree_nonrec (struct dom_info *di, basic_block bb, bool reverse)
   /* Ending block.  */
   basic_block ex_block;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Initialize our border blocks, and the first edge.  */
@@ -394,7 +394,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
   di->nodes = di->dfsnum - 1;
 
   /* This aborts e.g. when there is _no_ path from ENTRY to EXIT at all.  */
-  gcc_assert (di->nodes == (unsigned int) n_basic_blocks - 1);
+  gcc_assert (di->nodes == (unsigned int) n_basic_blocks_for_fn (cfun) - 1);
 }
 
 /* Compress the path from V to the root of its set and update path_min at the
@@ -652,7 +652,7 @@ calculate_dominance_info (enum cdi_direction dir)
 	{
 	  b->dom[dir_index] = et_new_tree (b);
 	}
-      n_bbs_in_dom_tree[dir_index] = n_basic_blocks;
+      n_bbs_in_dom_tree[dir_index] = n_basic_blocks_for_fn (cfun);
 
       init_dom_info (&di, dir);
       calc_dfs_tree (&di, reverse);
diff --git a/gcc/domwalk.c b/gcc/domwalk.c
index 4816b4c..4c7354e 100644
--- a/gcc/domwalk.c
+++ b/gcc/domwalk.c
@@ -150,13 +150,14 @@ void
 dom_walker::walk (basic_block bb)
 {
   basic_block dest;
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks * 2);
+  basic_block *worklist = XNEWVEC (basic_block,
+				   n_basic_blocks_for_fn (cfun) * 2);
   int sp = 0;
   int *postorder, postorder_num;
 
   if (m_dom_direction == CDI_DOMINATORS)
     {
-      postorder = XNEWVEC (int, n_basic_blocks);
+      postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
       postorder_num = inverted_post_order_compute (postorder);
       bb_postorder = XNEWVEC (int, last_basic_block);
       for (int i = 0; i < postorder_num; ++i)
diff --git a/gcc/function.c b/gcc/function.c
index a36f152..f6e5472 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -4039,7 +4039,7 @@ generate_setjmp_warnings (void)
 {
   bitmap setjmp_crosses = regstat_get_setjmp_crosses ();
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS
       || bitmap_empty_p (setjmp_crosses))
     return;
 
@@ -6026,7 +6026,7 @@ thread_prologue_and_epilogue_insns (void)
       /* Find the set of basic blocks that require a stack frame,
 	 and blocks that are too big to be duplicated.  */
 
-      vec.create (n_basic_blocks);
+      vec.create (n_basic_blocks_for_fn (cfun));
 
       CLEAR_HARD_REG_SET (set_up_by_prologue.set);
       add_to_hard_reg_set (&set_up_by_prologue.set, Pmode,
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index d08710c..da40a67 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -289,7 +289,7 @@ build_single_def_use_links (void)
   reg_defs.create (max_reg_num ());
   reg_defs.safe_grow_cleared (max_reg_num ());
 
-  reg_defs_stack.create (n_basic_blocks * 10);
+  reg_defs_stack.create (n_basic_blocks_for_fn (cfun) * 10);
   local_md = BITMAP_ALLOC (NULL);
   local_lr = BITMAP_ALLOC (NULL);
 
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 571e878..5ed99bd 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2662,7 +2662,7 @@ one_pre_gcse_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("PRE disabled")))
     return 0;
 
@@ -2708,7 +2708,8 @@ one_pre_gcse_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "PRE GCSE of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -3591,7 +3592,7 @@ one_code_hoisting_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("GCSE disabled")))
     return 0;
 
@@ -3642,7 +3643,8 @@ one_code_hoisting_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "HOIST of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -4067,24 +4069,25 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, n_basic_blocks_for_fn (cfun),
+	       n_edges / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
 
   /* If allocating memory for the dataflow bitmaps would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((n_basic_blocks_for_fn (cfun)
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, n_basic_blocks_for_fn (cfun), max_reg_num ());
 
       return true;
     }
diff --git a/gcc/graph.c b/gcc/graph.c
index 5c890e5..1dc9dbc 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -153,7 +153,7 @@ draw_cfg_node_succ_edges (pretty_printer *pp, int funcdef_no, basic_block bb)
 static void
 draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
 {
-  int *rpo = XNEWVEC (int, n_basic_blocks_for_function (fun));
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (fun));
   int i, n;
   sbitmap visited;
 
@@ -161,8 +161,8 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
   bitmap_clear (visited);
 
   n = pre_and_rev_post_order_compute_fn (fun, NULL, rpo, true);
-  for (i = n_basic_blocks_for_function (fun) - n;
-       i < n_basic_blocks_for_function (fun); i++)
+  for (i = n_basic_blocks_for_fn (fun) - n;
+       i < n_basic_blocks_for_fn (fun); i++)
     {
       basic_block bb = BASIC_BLOCK (rpo[i]);
       draw_cfg_node (pp, fun->funcdef_no, bb);
@@ -170,7 +170,7 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
     }
   free (rpo);
 
-  if (n != n_basic_blocks_for_function (fun))
+  if (n != n_basic_blocks_for_fn (fun))
     {
       /* Some blocks are unreachable.  We still want to dump them.  */
       basic_block bb;
diff --git a/gcc/graphite.c b/gcc/graphite.c
index 176c47c..ea54188 100644
--- a/gcc/graphite.c
+++ b/gcc/graphite.c
@@ -207,7 +207,8 @@ graphite_initialize (isl_ctx *ctx)
   if (number_of_loops (cfun) <= 1
       /* FIXME: This limit on the number of basic blocks of a function
 	 should be removed when the SCOP detection is faster.  */
-      || n_basic_blocks > PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION))
+      || (n_basic_blocks_for_fn (cfun) >
+	  PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION)))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
 	print_global_statistics (dump_file);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 728d51b..cbda61c 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -6754,7 +6754,7 @@ haifa_sched_init (void)
      whole function.  */
   {
     bb_vec_t bbs;
-    bbs.create (n_basic_blocks);
+    bbs.create (n_basic_blocks_for_fn (cfun));
     basic_block bb;
 
     sched_init_bbs ();
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index 4458723..3658814 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -2395,7 +2395,7 @@ estimate_function_body_sizes (struct cgraph_node *node, bool early)
   if (parms_info)
     compute_bb_predicates (node, parms_info, info);
   gcc_assert (cfun == my_function);
-  order = XNEWVEC (int, n_basic_blocks);
+  order = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   nblocks = pre_and_rev_post_order_compute (NULL, order, false);
   for (n = 0; n < nblocks; n++)
     {
diff --git a/gcc/ipa-utils.c b/gcc/ipa-utils.c
index 8410816..dff4c72 100644
--- a/gcc/ipa-utils.c
+++ b/gcc/ipa-utils.c
@@ -700,8 +700,8 @@ ipa_merge_profiles (struct cgraph_node *dst,
   cgraph_get_body (dst);
   srccfun = DECL_STRUCT_FUNCTION (src->decl);
   dstcfun = DECL_STRUCT_FUNCTION (dst->decl);
-  if (n_basic_blocks_for_function (srccfun)
-      != n_basic_blocks_for_function (dstcfun))
+  if (n_basic_blocks_for_fn (srccfun)
+      != n_basic_blocks_for_fn (dstcfun))
     {
       if (cgraph_dump_file)
 	fprintf (cgraph_dump_file,
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index ed51376..ca6f64d 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -3496,7 +3496,7 @@ ira_build (void)
 	}
       fprintf (ira_dump_file, "  regions=%d, blocks=%d, points=%d\n",
 	       current_loops == NULL ? 1 : number_of_loops (cfun),
-	       n_basic_blocks, ira_max_point);
+	       n_basic_blocks_for_fn (cfun), ira_max_point);
       fprintf (ira_dump_file,
 	       "    allocnos=%d (big %d), copies=%d, conflicts=%d, ranges=%d\n",
 	       ira_allocnos_num, nr_big, ira_copies_num, n, nr);
diff --git a/gcc/lcm.c b/gcc/lcm.c
index c13d2a6..6266d48 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -101,7 +101,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  qin = qout = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  qin = qout = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* We want a maximal solution, so make an optimistic initialization of
      ANTIN.  */
@@ -116,8 +116,8 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are predecessors of the exit block so that we
      can easily identify them below.  */
@@ -254,7 +254,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist
-    = XNEWVEC (basic_block, n_basic_blocks);
+    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Initialize a mapping from each edge to its index.  */
   for (i = 0; i < num_edges; i++)
@@ -290,8 +290,8 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
   /* Note that we do not use the last allocated element for our queue,
      as EXIT_BLOCK is never inserted into it. */
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Iterate until the worklist is empty.  */
   while (qlen)
@@ -481,7 +481,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist =
-    XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+    XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
 
   /* We want a maximal solution.  */
   bitmap_vector_ones (avout, last_basic_block);
@@ -495,8 +495,8 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are successors of the entry block so that we
      can easily identify them below.  */
@@ -610,7 +610,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   /* Initialize NEARER for each edge and build a mapping from an edge to
      its index.  */
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index f3bad97..2839c5c 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -998,7 +998,7 @@ lra_create_live_ranges (bool all_p)
   lra_point_freq = point_freq_vec.address ();
   int *post_order_rev_cfg = XNEWVEC (int, last_basic_block);
   int n_blocks_inverted = inverted_post_order_compute (post_order_rev_cfg);
-  lra_assert (n_blocks_inverted == n_basic_blocks);
+  lra_assert (n_blocks_inverted == n_basic_blocks_for_fn (cfun));
   for (i = n_blocks_inverted - 1; i >= 0; --i)
     {
       bb = BASIC_BLOCK (post_order_rev_cfg[i]);
diff --git a/gcc/lra.c b/gcc/lra.c
index 1aea599..3c8b71d 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -2059,7 +2059,7 @@ has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index d4a52a7..fa537b2 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -586,7 +586,7 @@ make_new_block (struct function *fn, unsigned int index)
   basic_block bb = alloc_block ();
   bb->index = index;
   SET_BASIC_BLOCK_FOR_FUNCTION (fn, index, bb);
-  n_basic_blocks_for_function (fn)++;
+  n_basic_blocks_for_fn (fn)++;
   return bb;
 }
 
diff --git a/gcc/mcf.c b/gcc/mcf.c
index 52020b8..e0e40d8 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -471,12 +471,12 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
   int fnum_edges;
 
   /* Each basic_block will be split into 2 during vertex transformation.  */
-  int fnum_vertices_after_transform =  2 * n_basic_blocks;
-  int fnum_edges_after_transform = n_edges + n_basic_blocks;
+  int fnum_vertices_after_transform =  2 * n_basic_blocks_for_fn (cfun);
+  int fnum_edges_after_transform = n_edges + n_basic_blocks_for_fn (cfun);
 
   /* Count the new SOURCE and EXIT vertices to be added.  */
   int fmax_num_vertices =
-    fnum_vertices_after_transform + n_edges + n_basic_blocks + 2;
+    fnum_vertices_after_transform + n_edges + n_basic_blocks_for_fn (cfun) + 2;
 
   /* In create_fixup_graph: Each basic block and edge can be split into 3
      edges. Number of balance edges = n_basic_blocks. So after
@@ -486,10 +486,10 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
      max_edges = 2 * (4 * n_basic_blocks + 3 * n_edges)
      = 8 * n_basic_blocks + 6 * n_edges
      < 8 * n_basic_blocks + 8 * n_edges.  */
-  int fmax_num_edges = 8 * (n_basic_blocks + n_edges);
+  int fmax_num_edges = 8 * (n_basic_blocks_for_fn (cfun) + n_edges);
 
   /* Initial num of vertices in the fixup graph.  */
-  fixup_graph->num_vertices = n_basic_blocks;
+  fixup_graph->num_vertices = n_basic_blocks_for_fn (cfun);
 
   /* Fixup graph vertex list.  */
   fixup_graph->vertex_list =
@@ -508,7 +508,8 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
   FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
     total_vertex_weight += bb->count;
 
-  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight / n_basic_blocks);
+  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight /
+				     n_basic_blocks_for_fn (cfun));
 
   k_pos = K_POS (sqrt_avg_vertex_weight);
   k_neg = K_NEG (sqrt_avg_vertex_weight);
diff --git a/gcc/profile.c b/gcc/profile.c
index 7118ac8..fb547ad 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -1156,9 +1156,9 @@ branch_prob (void)
 	num_instrumented++;
     }
 
-  total_num_blocks += n_basic_blocks;
+  total_num_blocks += n_basic_blocks_for_fn (cfun);
   if (dump_file)
-    fprintf (dump_file, "%d basic blocks\n", n_basic_blocks);
+    fprintf (dump_file, "%d basic blocks\n", n_basic_blocks_for_fn (cfun));
 
   total_num_edges += num_edges;
   if (dump_file)
@@ -1187,7 +1187,7 @@ branch_prob (void)
 
       /* Basic block flags */
       offset = gcov_write_tag (GCOV_TAG_BLOCKS);
-      for (i = 0; i != (unsigned) (n_basic_blocks); i++)
+      for (i = 0; i != (unsigned) (n_basic_blocks_for_fn (cfun)); i++)
 	gcov_write_unsigned (0);
       gcov_write_length (offset);
 
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 1917c46..3740934 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -3080,7 +3080,7 @@ convert_regs_2 (basic_block block)
      is only processed after all its predecessors.  The number of predecessors
      of every block has already been computed.  */
 
-  stack = XNEWVEC (basic_block, n_basic_blocks);
+  stack = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   sp = stack;
 
   *sp++ = block;
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 5b2c857..5e86fa5 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -672,7 +672,7 @@ regrename_analyze (bitmap bb_mask)
   n_bbs = pre_and_rev_post_order_compute (NULL, inverse_postorder, false);
 
   /* Gather some information about the blocks in this function.  */
-  rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks);
+  rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks_for_fn (cfun));
   i = 0;
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 204685d..077ee76 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -611,7 +611,7 @@ has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/reorg.c b/gcc/reorg.c
index e9aa889..fe6a751 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -3642,7 +3642,7 @@ dbr_schedule (rtx first)
 
   /* If the current function has no insns other than the prologue and
      epilogue, then do not try to fill any delay slots.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   /* Find the highest INSN_UID and allocate and initialize our map from
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index 8496014..287b826 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -3963,7 +3963,7 @@ sched_deps_init (bool global_p)
 {
   /* Average number of insns in the basic block.
      '+ 1' is used to make it nonzero.  */
-  int insns_in_block = sched_max_luid / n_basic_blocks + 1;
+  int insns_in_block = sched_max_luid / n_basic_blocks_for_fn (cfun) + 1;
 
   init_deps_data_vector ();
 
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index b70e071..8d23e33 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -625,7 +625,7 @@ schedule_ebbs (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   if (profile_info && flag_branch_probabilities)
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index b2a7dbd..20c29c5 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -793,7 +793,7 @@ haifa_find_rgns (void)
       /* Second traversal:find reducible inner loops and topologically sort
 	 block of each region.  */
 
-      queue = XNEWVEC (int, n_basic_blocks);
+      queue = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
 
       extend_regions_p = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS) > 0;
       if (extend_regions_p)
@@ -1153,7 +1153,7 @@ void
 extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 {
   int *order, i, rescan = 0, idx = *idxp, iter = 0, max_iter, *max_hdr;
-  int nblocks = n_basic_blocks - NUM_FIXED_BLOCKS;
+  int nblocks = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   max_iter = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS);
 
@@ -3115,7 +3115,7 @@ sched_rgn_init (bool single_blocks_p)
 
   /* Compute regions for scheduling.  */
   if (single_blocks_p
-      || n_basic_blocks == NUM_FIXED_BLOCKS + 1
+      || n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS + 1
       || !flag_schedule_interblock
       || is_cfg_nonregular ())
     {
@@ -3139,7 +3139,7 @@ sched_rgn_init (bool single_blocks_p)
 	free_dominance_info (CDI_DOMINATORS);
     }
 
-  gcc_assert (0 < nr_regions && nr_regions <= n_basic_blocks);
+  gcc_assert (0 < nr_regions && nr_regions <= n_basic_blocks_for_fn (cfun));
 
   RGN_BLOCKS (nr_regions) = (RGN_BLOCKS (nr_regions - 1) +
 			     RGN_NR_BLOCKS (nr_regions - 1));
@@ -3375,7 +3375,7 @@ schedule_insns (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   rgn_setup_common_sched_info ();
@@ -3421,8 +3421,8 @@ rgn_add_remove_insn (rtx insn, int remove_p)
 void
 extend_regions (void)
 {
-  rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks);
-  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks);
+  rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks_for_fn (cfun));
+  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks_for_fn (cfun));
   block_to_bb = XRESIZEVEC (int, block_to_bb, last_basic_block);
   containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
 }
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 4eb27c5..90bf1e2 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3649,7 +3649,7 @@ sel_recompute_toporder (void)
   int i, n, rgn;
   int *postorder, n_blocks;
 
-  postorder = XALLOCAVEC (int, n_basic_blocks);
+  postorder = XALLOCAVEC (int, n_basic_blocks_for_fn (cfun));
   n_blocks = post_order_compute (postorder, false, false);
 
   rgn = CONTAINING_RGN (BB_TO_BLOCK (0));
@@ -4912,10 +4912,10 @@ recompute_rev_top_order (void)
                                         rev_top_order_index_len);
     }
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
 
   n_blocks = post_order_compute (postorder, true, false);
-  gcc_assert (n_basic_blocks == n_blocks);
+  gcc_assert (n_basic_blocks_for_fn (cfun) == n_blocks);
 
   /* Build reverse function: for each basic block with BB->INDEX == K
      rev_top_order_index[K] is it's reverse topological sort number.  */
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index 08fdc77..c2d4185 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -7764,7 +7764,7 @@ run_selective_scheduling (void)
 {
   int rgn;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   sel_global_init ();
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index 68f293c..ffbeed2 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -848,7 +848,7 @@ remove_reachable_equiv_notes (basic_block bb, struct st_expr *smexpr)
   rtx last, insn, note;
   rtx mem = smexpr->pattern;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun));
   sp = 0;
   ei = ei_start (bb->succs);
 
@@ -1208,7 +1208,7 @@ one_store_motion_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "STORE_MOTION of %s, %d basic blocks, ",
-	       current_function_name (), n_basic_blocks);
+	       current_function_name (), n_basic_blocks_for_fn (cfun));
       fprintf (dump_file, "%d insns deleted, %d insns created\n",
 	       n_stores_deleted, n_stores_created);
     }
diff --git a/gcc/tracer.c b/gcc/tracer.c
index 86557fe..400ee46 100644
--- a/gcc/tracer.c
+++ b/gcc/tracer.c
@@ -226,7 +226,7 @@ static bool
 tail_duplicate (void)
 {
   fibnode_t *blocks = XCNEWVEC (fibnode_t, last_basic_block);
-  basic_block *trace = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *trace = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   int *counts = XNEWVEC (int, last_basic_block);
   int ninsns = 0, nduplicated = 0;
   gcov_type weighted_insns = 0, traced_insns = 0;
@@ -370,7 +370,7 @@ tracer (void)
 {
   bool changed;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   mark_dfs_back_edges ();
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index d646693..b32f8ea 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -170,7 +170,7 @@ init_empty_tree_cfg_for_function (struct function *fn)
   /* Initialize the basic block array.  */
   init_flow (fn);
   profile_status_for_function (fn) = PROFILE_ABSENT;
-  n_basic_blocks_for_function (fn) = NUM_FIXED_BLOCKS;
+  n_basic_blocks_for_fn (fn) = NUM_FIXED_BLOCKS;
   last_basic_block_for_function (fn) = NUM_FIXED_BLOCKS;
   vec_alloc (basic_block_info_for_function (fn), initial_cfg_capacity);
   vec_safe_grow_cleared (basic_block_info_for_function (fn),
@@ -227,12 +227,12 @@ build_gimple_cfg (gimple_seq seq)
     factor_computed_gotos ();
 
   /* Make sure there is always at least one block, even if it's empty.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     create_empty_bb (ENTRY_BLOCK_PTR);
 
   /* Adjust the size of the array.  */
-  if (basic_block_info->length () < (size_t) n_basic_blocks)
-    vec_safe_grow_cleared (basic_block_info, n_basic_blocks);
+  if (basic_block_info->length () < (size_t) n_basic_blocks_for_fn (cfun))
+    vec_safe_grow_cleared (basic_block_info, n_basic_blocks_for_fn (cfun));
 
   /* To speed up statement iterator walks, we first purge dead labels.  */
   cleanup_dead_labels ();
@@ -602,7 +602,7 @@ create_bb (void *h, void *e, basic_block after)
   /* Add the newly created block to the array.  */
   SET_BASIC_BLOCK (last_basic_block, bb);
 
-  n_basic_blocks++;
+  n_basic_blocks_for_fn (cfun)++;
   last_basic_block++;
 
   return bb;
@@ -2100,7 +2100,7 @@ gimple_dump_cfg (FILE *file, int flags)
     {
       dump_function_header (file, current_function_decl, flags);
       fprintf (file, ";; \n%d basic blocks, %d edges, last basic block %d.\n\n",
-	       n_basic_blocks, n_edges, last_basic_block);
+	       n_basic_blocks_for_fn (cfun), n_edges, last_basic_block);
 
       brief_dump_cfg (file, flags | TDF_COMMENT);
       fprintf (file, "\n");
@@ -2135,9 +2135,9 @@ dump_cfg_stats (FILE *file)
   fprintf (file, fmt_str, "", "  instances  ", "used ");
   fprintf (file, "---------------------------------------------------------\n");
 
-  size = n_basic_blocks * sizeof (struct basic_block_def);
+  size = n_basic_blocks_for_fn (cfun) * sizeof (struct basic_block_def);
   total += size;
-  fprintf (file, fmt_str_1, "Basic blocks", n_basic_blocks,
+  fprintf (file, fmt_str_1, "Basic blocks", n_basic_blocks_for_fn (cfun),
 	   SCALE (size), LABEL (size));
 
   num_edges = 0;
@@ -7025,7 +7025,7 @@ dump_function_to_file (tree fndecl, FILE *file, int flags)
       if (!ignore_topmost_bind)
 	fprintf (file, "{\n");
 
-      if (any_var && n_basic_blocks_for_function (fun))
+      if (any_var && n_basic_blocks_for_fn (fun))
 	fprintf (file, "\n");
 
       FOR_EACH_BB_FN (bb, fun)
@@ -7403,7 +7403,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index c627d2c..0863f16 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -903,7 +903,7 @@ remove_forwarder_block_with_phi (basic_block bb)
 static unsigned int
 merge_phi_nodes (void)
 {
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   basic_block *current = worklist;
   basic_block bb;
 
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index 77013b3..3edcaa4 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -4380,7 +4380,7 @@ gimple_expand_calls_inline (basic_block bb, copy_body_data *id)
 static void
 fold_marked_statements (int first, struct pointer_set_t *statements)
 {
-  for (; first < n_basic_blocks; first++)
+  for (; first < n_basic_blocks_for_fn (cfun); first++)
     if (BASIC_BLOCK (first))
       {
         gimple_stmt_iterator gsi;
@@ -4483,7 +4483,7 @@ optimize_inline_calls (tree fn)
 {
   copy_body_data id;
   basic_block bb;
-  int last = n_basic_blocks;
+  int last = n_basic_blocks_for_fn (cfun);
   struct gimplify_ctx gctx;
   bool inlined_p = false;
 
diff --git a/gcc/tree-ssa-ifcombine.c b/gcc/tree-ssa-ifcombine.c
index 73ebfe8..558f15f 100644
--- a/gcc/tree-ssa-ifcombine.c
+++ b/gcc/tree-ssa-ifcombine.c
@@ -677,7 +677,7 @@ tree_ssa_ifcombine (void)
      inner ones, and also that we do not try to visit a removed
      block.  This is opposite of PHI-OPT, because we cascade the
      combining rather than cascading PHIs. */
-  for (i = n_basic_blocks - NUM_FIXED_BLOCKS - 1; i >= 0; i--)
+  for (i = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS - 1; i >= 0; i--)
     {
       basic_block bb = bbs[i];
       gimple stmt = last_stmt (bb);
diff --git a/gcc/tree-ssa-loop-ch.c b/gcc/tree-ssa-loop-ch.c
index b74c56d..4591b91 100644
--- a/gcc/tree-ssa-loop-ch.c
+++ b/gcc/tree-ssa-loop-ch.c
@@ -145,9 +145,9 @@ copy_loop_headers (void)
       return 0;
     }
 
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  copied_bbs = XNEWVEC (basic_block, n_basic_blocks);
-  bbs_size = n_basic_blocks;
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  copied_bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  bbs_size = n_basic_blocks_for_fn (cfun);
 
   FOR_EACH_LOOP (li, loop, 0)
     {
diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index 2283b5b..aff9573 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -1591,7 +1591,7 @@ analyze_memory_references (void)
   /* Collect all basic-blocks in loops and sort them after their
      loops postorder.  */
   i = 0;
-  bbs = XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   FOR_EACH_BB (bb)
     if (bb->loop_father != current_loops->tree_root)
       bbs[i++] = bb;
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index 2bb2253..0f3da98 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -191,7 +191,7 @@ compute_live_loop_exits (bitmap live_exits, bitmap use_blocks,
   /* Normally the work list size is bounded by the number of basic
      blocks in the largest loop.  We don't know this number, but we
      can be fairly sure that it will be relatively small.  */
-  worklist.create (MAX (8, n_basic_blocks / 128));
+  worklist.create (MAX (8, n_basic_blocks_for_fn (cfun) / 128));
 
   EXECUTE_IF_SET_IN_BITMAP (use_blocks, 0, i, bi)
     {
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index 9a29411..81aa843 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -510,7 +510,7 @@ execute_cse_reciprocals (void)
 
   occ_pool = create_alloc_pool ("dominators for recip",
 				sizeof (struct occurrence),
-				n_basic_blocks / 3 + 1);
+				n_basic_blocks_for_fn (cfun) / 3 + 1);
 
   memset (&reciprocal_stats, 0, sizeof (reciprocal_stats));
   calculate_dominance_info (CDI_DOMINATORS);
diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c
index ef114a0..b000ec6 100644
--- a/gcc/tree-ssa-phiopt.c
+++ b/gcc/tree-ssa-phiopt.c
@@ -335,7 +335,7 @@ tree_ssa_phiopt_worker (bool do_store_elim, bool do_hoist_loads)
      outer ones, and also that we do not try to visit a removed
      block.  */
   bb_order = single_pred_before_succ_order ();
-  n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 1f5ff23..22a95bb 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -3721,7 +3721,7 @@ compute_avail (void)
     }
 
   /* Allocate the worklist.  */
-  worklist = XNEWVEC (basic_block, n_basic_blocks);
+  worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Seed the algorithm by putting the dominator children of the entry
      block on the worklist.  */
@@ -4652,7 +4652,7 @@ init_pre (void)
   connect_infinite_loops_to_exit ();
   memset (&pre_stats, 0, sizeof (pre_stats));
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   postorder_num = inverted_post_order_compute (postorder);
 
   alloc_aux_for_blocks (sizeof (struct bb_bitmap_sets));
@@ -4728,7 +4728,7 @@ do_pre (void)
      fixed, don't run it when he have an incredibly large number of
      bb's.  If we aren't going to run insert, there is no point in
      computing ANTIC, either, even though it's plenty fast.  */
-  if (n_basic_blocks < 4000)
+  if (n_basic_blocks_for_fn (cfun) < 4000)
     {
       compute_antic ();
       insert ();
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 538a8ef..a79d1c1 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -4535,7 +4535,7 @@ init_reassoc (void)
 {
   int i;
   long rank = 2;
-  int *bbs = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  int *bbs = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
 
   /* Find the loops, so that we can prevent moving calculations in
      them.  */
@@ -4565,7 +4565,7 @@ init_reassoc (void)
     }
 
   /* Set up rank for each BB  */
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
     bb_rank[bbs[i]] = ++rank  << 16;
 
   free (bbs);
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index ed4e1db..2930054 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3972,13 +3972,14 @@ init_scc_vn (void)
   shared_lookup_phiargs.create (0);
   shared_lookup_references.create (0);
   rpo_numbers = XNEWVEC (int, last_basic_block);
-  rpo_numbers_temp = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rpo_numbers_temp =
+    XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   pre_and_rev_post_order_compute (NULL, rpo_numbers_temp, false);
 
   /* RPO numbers is an array of rpo ordering, rpo[i] = bb means that
      the i'th block in RPO order is bb.  We want to map bb's to RPO
      numbers, so we need to rearrange this array.  */
-  for (j = 0; j < n_basic_blocks - NUM_FIXED_BLOCKS; j++)
+  for (j = 0; j < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; j++)
     rpo_numbers[rpo_numbers_temp[j]] = j;
 
   XDELETE (rpo_numbers_temp);
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index db95ce1..07e7da8 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -761,11 +761,11 @@ static void
 init_worklist (void)
 {
   alloc_aux_for_blocks (sizeof (struct aux_bb_info));
-  same_succ_htab.create (n_basic_blocks);
+  same_succ_htab.create (n_basic_blocks_for_fn (cfun));
   same_succ_edge_flags = XCNEWVEC (int, last_basic_block);
   deleted_bbs = BITMAP_ALLOC (NULL);
   deleted_bb_preds = BITMAP_ALLOC (NULL);
-  worklist.create (n_basic_blocks);
+  worklist.create (n_basic_blocks_for_fn (cfun));
   find_same_succ ();
 
   if (dump_file && (dump_flags & TDF_DETAILS))
@@ -993,7 +993,7 @@ static vec<bb_cluster> all_clusters;
 static void
 alloc_cluster_vectors (void)
 {
-  all_clusters.create (n_basic_blocks);
+  all_clusters.create (n_basic_blocks_for_fn (cfun));
 }
 
 /* Reset all cluster vectors.  */
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index 5255d7fb..5a65fd9 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -192,7 +192,7 @@ associate_equivalences_with_edges (void)
 
 	      /* Now walk over the blocks to determine which ones were
 		 marked as being reached by a useful case label.  */
-	      for (i = 0; i < n_basic_blocks; i++)
+	      for (i = 0; i < n_basic_blocks_for_fn (cfun); i++)
 		{
 		  tree node = info[i];
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 8b07f9f..700d42f 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -838,7 +838,7 @@ vt_stack_adjustments (void)
   VTI (ENTRY_BLOCK_PTR)->out.stack_adjust = INCOMING_FRAME_SP_OFFSET;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Push the first edge on to the stack.  */
@@ -6904,10 +6904,10 @@ vt_find_locations (void)
   timevar_push (TV_VAR_TRACKING_DATAFLOW);
   /* Compute reverse completion order of depth first search of the CFG
      so that the data-flow runs faster.  */
-  rc_order = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   bb_order = XNEWVEC (int, last_basic_block);
   pre_and_rev_post_order_compute (NULL, rc_order, false);
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
     bb_order[rc_order[i]] = i;
   free (rc_order);
 
@@ -10157,7 +10157,8 @@ variable_tracking_main_1 (void)
       return 0;
     }
 
-  if (n_basic_blocks > 500 && n_edges / n_basic_blocks >= 20)
+  if (n_basic_blocks_for_fn (cfun) > 500 &&
+      n_edges / n_basic_blocks_for_fn (cfun) >= 20)
     {
       vt_debug_insns_local (true);
       return 0;

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH] Avoid some unnecessary set_cfun calls
  2013-11-13 11:27   ` Jakub Jelinek
  2013-11-13 11:38     ` Richard Biener
@ 2013-11-16 12:58     ` Richard Sandiford
  1 sibling, 0 replies; 42+ messages in thread
From: Richard Sandiford @ 2013-11-16 12:58 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: Richard Biener, Michael Meissner, gcc-patches

Jakub Jelinek <jakub@redhat.com> writes:
> On Wed, Nov 13, 2013 at 11:27:10AM +0100, Richard Biener wrote:
>> > Also, I wonder if we couldn't defer the expensive ira_init, if the info
>> > computed by it is used only during RTL optimization passes (haven't verified
>> > it yet), then supposedly we could just remember using some target hook
>> > what the last state was when we did ira_init last time, and call ira_init
>> > again at the start of expansion or so if it is different from the last time.
>> > For i?86/x86_64/ppc* this would be whether the current function's
>> > DECL_FUNCTION_SPECIFIC_TARGET is the same as one for which ira_init has been
>> > called, for rx whether interrupt attribute is the same and for mips whatever
>> > is needed.
>> 
>> I wonder why we cannot move all the stuff we re-init to a member
>> of struct function (or rather have a pointer to that info there
>> to cache it across functions with the same options).  That is,
>> get rid of more global state?  That would make switching back
>> and forth cheaper.
>
> Isn't that what the SWITCHABLE_TARGET stuff is all about?
> So, perhaps we should just define SWITCHABLE_TARGET on i?86/x86_64/powerpc*
(and rx if the maintainer cares) and tweak it to attach
struct target_globals * to TARGET_OPTION_NODE somehow.
> A problem might be that lots of the save_target_globals
> allocated structures are heap allocated rather than GC, so we might leak
> memory.  Wonder if save_target_globals couldn't just compute the
> aggregate size of all the structures it allocates with XCNEW right now
> (plus required alignment if needed) and just allocate them together
> with the ggc_alloc_target_globals after the target_globals structure
> itself.

Yeah, that might be worth doing.  I think the only non-GCed structures
with subpointers are target_ira_int and target_lra_int, but we could
probably convert them to GCed structures.  (And perhaps use the same
technique recursively.  E.g. LRA could work out the maximum number of
operand_alternative structures needed and allocate them in one go.)
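[Editorial sketch: the single-allocation idea above can be illustrated in isolation roughly as follows. This is a minimal standalone sketch with made-up structure contents and a hand-rolled alignment helper, not GCC's actual save_target_globals / ggc_alloc interfaces; real code would compute the aggregate size the same way but allocate through the GC allocator.]

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for two of the per-target state structures.  */
struct target_ira_int { int ira_data[4]; };
struct target_lra_int { int lra_data[8]; };

/* Round OFFSET up to a multiple of ALIGN (ALIGN must be a power of 2).  */
static size_t
align_up (size_t offset, size_t align)
{
  return (offset + align - 1) & ~(align - 1);
}

struct globals
{
  struct target_ira_int *ira;
  struct target_lra_int *lra;
  void *block;   /* The single underlying allocation.  */
};

/* Compute the aggregate size of all substructures, including any
   padding needed for alignment, allocate once, and hand out interior
   pointers -- so freeing (or GC-collecting) BLOCK releases everything.  */
static struct globals
alloc_globals (void)
{
  size_t off_ira = 0;
  size_t off_lra = align_up (off_ira + sizeof (struct target_ira_int),
			     _Alignof (struct target_lra_int));
  size_t total = off_lra + sizeof (struct target_lra_int);

  struct globals g;
  g.block = calloc (1, total);
  g.ira = (struct target_ira_int *) ((char *) g.block + off_ira);
  g.lra = (struct target_lra_int *) ((char *) g.block + off_lra);
  return g;
}
```

The point is that no substructure is heap-allocated on its own, so attaching the containing block to a GC root cannot leak the pieces.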

Thanks,
Richard


* Re: [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls)
  2013-11-16 10:49         ` [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls) David Malcolm
@ 2013-11-19  5:27           ` David Malcolm
  2013-11-19  9:19             ` Richard Biener
  0 siblings, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-11-19  5:27 UTC (permalink / raw)
  To: Richard Biener; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

On Fri, 2013-11-15 at 20:38 -0500, David Malcolm wrote:
> On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> > On Wed, 13 Nov 2013, David Malcolm wrote:
> > 
> > > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > > 
> > > > > Hi,
> > > > > 
> > > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > > Hi!
> > > > > > 
> > > > > > void f1 (void) {}
> > > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > > expensive and there are hundreds of such calls.
> > > > > > The following patch is just a quick change to avoid some of them:
> > > > > > execute_function_todo starts with:
> > > > > >   unsigned int flags = (size_t)data;
> > > > > >   flags &= ~cfun->last_verified;
> > > > > >   if (!flags)
> > > > > >     return;
> > > > > > and if flags is initially zero, it does nothing.
> > > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > > >   if (dump_file && current_function_decl)
> > > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > > > set_cfun to every function during IPA.
> > > > > > 
> > > > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > > > it yet), then supposedly we could just remember using some target hook
> > > > > > what the last state was when we did ira_init last time, and call ira_init
> > > > > > again at the start of expansion or so if it is different from the
> > > > > > last time.
> > > > > 
> > > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > > changing functions in early and IPA passes.
> > > > 
> > > > Sounds like a hack ;)
> > > > 
> > > > Better get things working without the cfun/current_function_decl globals.
> > > > Wasn't there someone replacing all implicit uses with explicit ones
> > > > for stuff like n_basic_blocks?
> > > 
> > > I was working on this:
> > > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > > though I switched to other tasks I felt were higher priority; sorry.
> > > 
> > > Do you still want me to go ahead and commit the series of changes you
> > > pre-approved there?
> > > 
> > > i.e. the "n_basic_blocks" macro goes away in favor of:
> > >    n_basic_blocks_for_fn (cfun)
> > > as a renaming of the existing n_basic_blocks_for_function macro,
> > > followed up by analogous changes to the other macros.
> > > 
> > > Or should I repost before committing?
> > 
> > I'd say create the n_basic_blocks patch and post it, that gives
> > people a chance to object.  If nobody chimes in I approve it
> > and pre-approve the rest ;)
> > 
> > Using n_basic_blocks_for_fn (cfun) might feel backwards if
> > eventually we'd want to C++-ify struct function and make
> > n_basic_blocks a member function which would make it
> > cfun->n_basic_blocks () instead.  Ok, I think that will get
> > us into C++ bikeshedding again ;)
> 
> [I can't face another C vs C++ discussion right now :)]
> 
> Thanks.  Attached is such a patch, eliminating the:
>   n_basic_blocks
> macro in favor of
>   n_basic_blocks_for_fn (cfun)
> 
> Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
> compiled stage1 on spu-unknown-elf and s390-linux-gnu (given that those
> config files are affected).
> 
> Given the conditional pre-approval above, I'm posting here to give
> people a chance to object - otherwise I'll commit, and follow up with the
> other macros that implicitly use cfun as per the thread linked to above.

Committed to trunk as r204995; I plan to commit followup patches to
remove the other such macros.
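[Editorial sketch: the committed change can be reduced to the following standalone illustration. The struct fields here are a hypothetical subset of GCC's real control_flow_graph and struct function, shown only to make the explicit-function accessor concrete.]

```c
/* Hypothetical stand-ins for the relevant GCC structures.  */
struct control_flow_graph { int x_n_basic_blocks; };
struct function { struct control_flow_graph *cfg; };

/* After the change, the accessor names its function argument
   explicitly instead of implicitly reading the global cfun:
   every call site now documents which function's CFG it touches.  */
#define n_basic_blocks_for_fn(FN) ((FN)->cfg->x_n_basic_blocks)
```

A caller with two functions in flight can query each without switching any global state, which is exactly what makes the eventual removal of set_cfun calls possible.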



* Re: [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls)
  2013-11-19  5:27           ` David Malcolm
@ 2013-11-19  9:19             ` Richard Biener
  2013-11-19 17:29               ` Committed: removal of n_edges macro David Malcolm
  0 siblings, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-11-19  9:19 UTC (permalink / raw)
  To: David Malcolm; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

On Mon, 18 Nov 2013, David Malcolm wrote:

> On Fri, 2013-11-15 at 20:38 -0500, David Malcolm wrote:
> > On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> > > On Wed, 13 Nov 2013, David Malcolm wrote:
> > > 
> > > > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > > > Hi!
> > > > > > > 
> > > > > > > void f1 (void) {}
> > > > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > > > expensive and there are hundreds of such calls.
> > > > > > > The following patch is just a quick change to avoid some of them:
> > > > > > > execute_function_todo starts with:
> > > > > > >   unsigned int flags = (size_t)data;
> > > > > > >   flags &= ~cfun->last_verified;
> > > > > > >   if (!flags)
> > > > > > >     return;
> > > > > > > and if flags is initially zero, it does nothing.
> > > > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > > > >   if (dump_file && current_function_decl)
> > > > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > > > > set_cfun to every function during IPA.
> > > > > > > 
> > > > > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > > > > it yet), then supposedly we could just remember using some target hook
> > > > > > > what the last state was when we did ira_init last time, and call ira_init
> > > > > > > again at the start of expansion or so if it is different from the
> > > > > > > last time.
> > > > > > 
> > > > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > > > changing functions in early and IPA passes.
> > > > > 
> > > > > Sounds like a hack ;)
> > > > > 
> > > > > Better get things working without the cfun/current_function_decl globals.
> > > > > Wasn't there someone replacing all implicit uses with explicit ones
> > > > > for stuff like n_basic_blocks?
> > > > 
> > > > I was working on this:
> > > > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > > > though I switched to other tasks I felt were higher priority; sorry.
> > > > 
> > > > Do you still want me to go ahead and commit the series of changes you
> > > > pre-approved there?
> > > > 
> > > > i.e. the "n_basic_blocks" macro goes away in favor of:
> > > >    n_basic_blocks_for_fn (cfun)
> > > > as a renaming of the existing n_basic_blocks_for_function macro,
> > > > followed up by analogous changes to the other macros.
> > > > 
> > > > Or should I repost before committing?
> > > 
> > > I'd say create the n_basic_blocks patch and post it, that gives
> > > people a chance to object.  If nobody chimes in I approve it
> > > and pre-approve the rest ;)
> > > 
> > > Using n_basic_blocks_for_fn (cfun) might feel backwards if
> > > eventually we'd want to C++-ify struct function and make
> > > n_basic_blocks a member function which would make it
> > > cfun->n_basic_blocks () instead.  Ok, I think that will get
> > > us into C++ bikeshedding again ;)
> > 
> > [I can't face another C vs C++ discussion right now :)]
> > 
> > Thanks.  Attached is such a patch, eliminating the:
> >   n_basic_blocks
> > macro in favor of
> >   n_basic_blocks_for_fn (cfun)
> > 
> > Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
> > compiled stage1 on spu-unknown-elf and s390-linux-gnu (given that those
> > config files are affected).
> > 
> > Given the conditional pre-approval above, I'm posting here to give
> > people a chance to object - otherwise I'll commit, and follow up with the
> > other macros that implicitly use cfun as per the thread linked to above.
> 
> Committed to trunk as r204995; I plan to commit followup patches to
> remove the other such macros.

Thanks!

Richard.


* Committed: removal of n_edges macro
  2013-11-19  9:19             ` Richard Biener
@ 2013-11-19 17:29               ` David Malcolm
  2013-11-19 17:33                 ` Richard Biener
  2013-11-20  1:12                 ` Committed: removal of ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros David Malcolm
  0 siblings, 2 replies; 42+ messages in thread
From: David Malcolm @ 2013-11-19 17:29 UTC (permalink / raw)
  To: Richard Biener; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 4893 bytes --]

On Tue, 2013-11-19 at 09:49 +0100, Richard Biener wrote:
> On Mon, 18 Nov 2013, David Malcolm wrote:
> 
> > On Fri, 2013-11-15 at 20:38 -0500, David Malcolm wrote:
> > > On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> > > > On Wed, 13 Nov 2013, David Malcolm wrote:
> > > > 
> > > > > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > > > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > > > > 
> > > > > > > Hi,
> > > > > > > 
> > > > > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > > > > Hi!
> > > > > > > > 
> > > > > > > > void f1 (void) {}
> > > > > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > > > > expensive and there are hundreds of such calls.
> > > > > > > > The following patch is just a quick change to avoid some of them:
> > > > > > > > execute_function_todo starts with:
> > > > > > > >   unsigned int flags = (size_t)data;
> > > > > > > >   flags &= ~cfun->last_verified;
> > > > > > > >   if (!flags)
> > > > > > > >     return;
> > > > > > > > and if flags is initially zero, it does nothing.
> > > > > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > > > > >   if (dump_file && current_function_decl)
> > > > > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > > > > > set_cfun to every function during IPA.
> > > > > > > > 
> > > > > > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > > > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > > > > > it yet), then supposedly we could just remember using some target hook
> > > > > > > > what the last state was when we did ira_init last time, and call ira_init
> > > > > > > > again at the start of expansion or so if it is different from the
> > > > > > > > last time.
> > > > > > > 
> > > > > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > > > > changing functions in early and IPA passes.
> > > > > > 
> > > > > > Sounds like a hack ;)
> > > > > > 
> > > > > > Better get things working without the cfun/current_function_decl globals.
> > > > > > Wasn't there someone replacing all implicit uses with explicit ones
> > > > > > for stuff like n_basic_blocks?
> > > > > 
> > > > > I was working on this:
> > > > > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > > > > though I switched to other tasks I felt were higher priority; sorry.
> > > > > 
> > > > > Do you still want me to go ahead and commit the series of changes you
> > > > > pre-approved there?
> > > > > 
> > > > > i.e. the "n_basic_blocks" macro goes away in favor of:
> > > > >    n_basic_blocks_for_fn (cfun)
> > > > > as a renaming of the existing n_basic_blocks_for_function macro,
> > > > > followed up by analogous changes to the other macros.
> > > > > 
> > > > > Or should I repost before committing?
> > > > 
> > > > I'd say create the n_basic_blocks patch and post it, that gives
> > > > people a chance to object.  If nobody chimes in I approve it
> > > > and pre-approve the rest ;)
> > > > 
> > > > Using n_basic_blocks_for_fn (cfun) might feel backwards if
> > > > eventually we'd want to C++-ify struct function and make
> > > > n_basic_blocks a member function which would make it
> > > > cfun->n_basic_blocks () instead.  Ok, I think that will get
> > > > us into C++ bikeshedding again ;)
> > > 
> > > [I can't face another C vs C++ discussion right now :)]
> > > 
> > > Thanks.  Attached is such a patch, eliminating the:
> > >   n_basic_blocks
> > > macro in favor of
> > >   n_basic_blocks_for_fn (cfun)
> > > 
> > > Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
> > > compiled stage1 on spu-unknown-elf and s390-linux-gnu (given that those
> > > config files are affected).
> > > 
> > > Given the conditional pre-approval above, I'm posting here to give
> > > people a chance to object - otherwise I'll commit, and follow up with the
> > > other macros that implicitly use cfun as per the thread linked to above.
> > 
> > Committed to trunk as r204995; I plan to commit followup patches to
> > remove the other such macros.
> 
> Thanks!

The following removed the "n_edges" macro.  I committed it to trunk as
r205044 having successfully bootstrapped on x86_64-unknown-linux-gnu.

(should I continue to post these patches as I commit them?)



[-- Attachment #2: 0001-Eliminate-n_edges-macro.patch --]
[-- Type: text/x-patch, Size: 9694 bytes --]

From dc32f3cec5bf56cf071a5a7f7b926c2c26d3fe82 Mon Sep 17 00:00:00 2001
From: David Malcolm <dmalcolm@redhat.com>
Date: Mon, 18 Nov 2013 20:23:36 -0500
Subject: [PATCH 1/2] Eliminate n_edges macro

gcc/

	* basic-block.h (n_edges_for_function): Rename macro to...
	(n_edges_for_fn): ...this.
	(n_edges): Eliminate macro as work towards making uses of
	cfun be explicit.

	* cfg.c (init_flow): Update for renaming of "n_edges_for_function"
	to "n_edges_for_fn".

	* cfg.c (unchecked_make_edge): Remove usage of n_edges macro.
	(clear_edges): Likewise.
	(free_edge): Likewise.
	* cfghooks.c (dump_flow_info): Likewise.
	* cprop.c (is_too_expensive): Likewise.
	* df-core.c (df_worklist_dataflow_doublequeue): Likewise.
	* gcse.c (is_too_expensive): Likewise.
	(prune_insertions_deletions): Likewise.
	* mcf.c (create_fixup_graph): Likewise.
	* sched-rgn.c (haifa_find_rgns): Likewise.
	* tree-cfg.c (gimple_dump_cfg): Likewise.
	* var-tracking.c (variable_tracking_main_1): Likewise.
---
 gcc/basic-block.h  | 3 +--
 gcc/cfg.c          | 8 ++++----
 gcc/cfghooks.c     | 2 +-
 gcc/cprop.c        | 4 ++--
 gcc/df-core.c      | 2 +-
 gcc/gcse.c         | 8 ++++----
 gcc/mcf.c          | 9 ++++++---
 gcc/sched-rgn.c    | 2 +-
 gcc/tree-cfg.c     | 3 ++-
 gcc/var-tracking.c | 2 +-
 10 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index d247d4f..38391be 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -316,7 +316,7 @@ struct GTY(()) control_flow_graph {
 #define EXIT_BLOCK_PTR_FOR_FUNCTION(FN)	     ((FN)->cfg->x_exit_block_ptr)
 #define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
 #define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
-#define n_edges_for_function(FN)	     ((FN)->cfg->x_n_edges)
+#define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
 #define profile_status_for_function(FN)	     ((FN)->cfg->x_profile_status)
@@ -330,7 +330,6 @@ struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR		(cfun->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR		(cfun->cfg->x_exit_block_ptr)
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
-#define n_edges			(cfun->cfg->x_n_edges)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
 #define profile_status		(cfun->cfg->x_profile_status)
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 10791a7..166ad38 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -69,7 +69,7 @@ init_flow (struct function *the_fun)
 {
   if (!the_fun->cfg)
     the_fun->cfg = ggc_alloc_cleared_control_flow_graph ();
-  n_edges_for_function (the_fun) = 0;
+  n_edges_for_fn (the_fun) = 0;
   ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun)
     = ggc_alloc_cleared_basic_block_def ();
   ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun)->index = ENTRY_BLOCK;
@@ -88,7 +88,7 @@ init_flow (struct function *the_fun)
 static void
 free_edge (edge e)
 {
-  n_edges--;
+  n_edges_for_fn (cfun)--;
   ggc_free (e);
 }
 
@@ -114,7 +114,7 @@ clear_edges (void)
   vec_safe_truncate (EXIT_BLOCK_PTR->preds, 0);
   vec_safe_truncate (ENTRY_BLOCK_PTR->succs, 0);
 
-  gcc_assert (!n_edges);
+  gcc_assert (!n_edges_for_fn (cfun));
 }
 \f
 /* Allocate memory for basic_block.  */
@@ -262,7 +262,7 @@ unchecked_make_edge (basic_block src, basic_block dst, int flags)
 {
   edge e;
   e = ggc_alloc_cleared_edge_def ();
-  n_edges++;
+  n_edges_for_fn (cfun)++;
 
   e->src = src;
   e->dest = dst;
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 3016c54..20b90bf 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -324,7 +324,7 @@ dump_flow_info (FILE *file, int flags)
   basic_block bb;
 
   fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks_for_fn (cfun),
-	   n_edges);
+	   n_edges_for_fn (cfun));
   FOR_ALL_BB (bb)
     dump_bb (file, bb, 0, flags);
 
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 78cfeba..35a44f2 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -1729,12 +1729,12 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
+  if (n_edges_for_fn (cfun) > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
 	       pass, n_basic_blocks_for_fn (cfun),
-	       n_edges / n_basic_blocks_for_fn (cfun));
+	       n_edges_for_fn (cfun) / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
diff --git a/gcc/df-core.c b/gcc/df-core.c
index 20d6c4e..37876af 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1097,7 +1097,7 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
     fprintf (dump_file, "df_worklist_dataflow_doublequeue:"
 	     "n_basic_blocks %d n_edges %d"
 	     " count %d (%5.2g)\n",
-	     n_basic_blocks_for_fn (cfun), n_edges,
+	     n_basic_blocks_for_fn (cfun), n_edges_for_fn (cfun),
 	     dcount, dcount / (float)n_basic_blocks_for_fn (cfun));
 }
 
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 5ed99bd..a37ac6b 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -1964,7 +1964,7 @@ prune_insertions_deletions (int n_elems)
 
   /* Iterate over the edges counting the number of times each expression
      needs to be inserted.  */
-  for (i = 0; i < (unsigned) n_edges; i++)
+  for (i = 0; i < (unsigned) n_edges_for_fn (cfun); i++)
     {
       EXECUTE_IF_SET_IN_BITMAP (pre_insert_map[i], 0, j, sbi)
 	insertions[j]++;
@@ -1990,7 +1990,7 @@ prune_insertions_deletions (int n_elems)
   /* Now prune PRE_INSERT_MAP and PRE_DELETE_MAP based on PRUNE_EXPRS.  */
   EXECUTE_IF_SET_IN_BITMAP (prune_exprs, 0, j, sbi)
     {
-      for (i = 0; i < (unsigned) n_edges; i++)
+      for (i = 0; i < (unsigned) n_edges_for_fn (cfun); i++)
 	bitmap_clear_bit (pre_insert_map[i], j);
 
       for (i = 0; i < (unsigned) last_basic_block; i++)
@@ -4069,12 +4069,12 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
+  if (n_edges_for_fn (cfun) > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
 	       pass, n_basic_blocks_for_fn (cfun),
-	       n_edges / n_basic_blocks_for_fn (cfun));
+	       n_edges_for_fn (cfun) / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
diff --git a/gcc/mcf.c b/gcc/mcf.c
index e0e40d8..45adda3 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -472,11 +472,13 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
 
   /* Each basic_block will be split into 2 during vertex transformation.  */
   int fnum_vertices_after_transform =  2 * n_basic_blocks_for_fn (cfun);
-  int fnum_edges_after_transform = n_edges + n_basic_blocks_for_fn (cfun);
+  int fnum_edges_after_transform =
+    n_edges_for_fn (cfun) + n_basic_blocks_for_fn (cfun);
 
   /* Count the new SOURCE and EXIT vertices to be added.  */
   int fmax_num_vertices =
-    fnum_vertices_after_transform + n_edges + n_basic_blocks_for_fn (cfun) + 2;
+    (fnum_vertices_after_transform + n_edges_for_fn (cfun)
+     + n_basic_blocks_for_fn (cfun) + 2);
 
   /* In create_fixup_graph: Each basic block and edge can be split into 3
      edges. Number of balance edges = n_basic_blocks. So after
@@ -486,7 +488,8 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
      max_edges = 2 * (4 * n_basic_blocks + 3 * n_edges)
      = 8 * n_basic_blocks + 6 * n_edges
      < 8 * n_basic_blocks + 8 * n_edges.  */
-  int fmax_num_edges = 8 * (n_basic_blocks_for_fn (cfun) + n_edges);
+  int fmax_num_edges = 8 * (n_basic_blocks_for_fn (cfun) +
+			    n_edges_for_fn (cfun));
 
   /* Initial num of vertices in the fixup graph.  */
   fixup_graph->num_vertices = n_basic_blocks_for_fn (cfun);
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 20c29c5..87042dd 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -643,7 +643,7 @@ haifa_find_rgns (void)
   /* Allocate and initialize variables for the first traversal.  */
   max_hdr = XNEWVEC (int, last_basic_block);
   dfs_nr = XCNEWVEC (int, last_basic_block);
-  stack = XNEWVEC (edge_iterator, n_edges);
+  stack = XNEWVEC (edge_iterator, n_edges_for_fn (cfun));
 
   inner = sbitmap_alloc (last_basic_block);
   bitmap_ones (inner);
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index c30b113..d2af39e 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -2106,7 +2106,8 @@ gimple_dump_cfg (FILE *file, int flags)
     {
       dump_function_header (file, current_function_decl, flags);
       fprintf (file, ";; \n%d basic blocks, %d edges, last basic block %d.\n\n",
-	       n_basic_blocks_for_fn (cfun), n_edges, last_basic_block);
+	       n_basic_blocks_for_fn (cfun), n_edges_for_fn (cfun),
+	       last_basic_block);
 
       brief_dump_cfg (file, flags | TDF_COMMENT);
       fprintf (file, "\n");
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index a569d46..cfda63a 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -10161,7 +10161,7 @@ variable_tracking_main_1 (void)
     }
 
   if (n_basic_blocks_for_fn (cfun) > 500 &&
-      n_edges / n_basic_blocks_for_fn (cfun) >= 20)
+      n_edges_for_fn (cfun) / n_basic_blocks_for_fn (cfun) >= 20)
     {
       vt_debug_insns_local (true);
       return 0;
-- 
1.7.11.7



* Re: Committed: removal of n_edges macro
  2013-11-19 17:29               ` Committed: removal of n_edges macro David Malcolm
@ 2013-11-19 17:33                 ` Richard Biener
  2013-11-20  1:12                 ` Committed: removal of ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros David Malcolm
  1 sibling, 0 replies; 42+ messages in thread
From: Richard Biener @ 2013-11-19 17:33 UTC (permalink / raw)
  To: David Malcolm; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

David Malcolm <dmalcolm@redhat.com> wrote:
>On Tue, 2013-11-19 at 09:49 +0100, Richard Biener wrote:
>> On Mon, 18 Nov 2013, David Malcolm wrote:
>> 
>> > On Fri, 2013-11-15 at 20:38 -0500, David Malcolm wrote:
>> > > On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
>> > > > On Wed, 13 Nov 2013, David Malcolm wrote:
>> > > > 
>> > > > > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
>> > > > > > On Wed, 13 Nov 2013, Martin Jambor wrote:
>> > > > > > 
>> > > > > > > Hi,
>> > > > > > > 
>> > > > > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek
>wrote:
>> > > > > > > > Hi!
>> > > > > > > > 
>> > > > > > > > void f1 (void) {}
>> > > > > > > > __attribute__((target ("avx"))) void f2 (void) {}
>> > > > > > > > __attribute__((target ("avx2"))) void f3 (void) {}
>> > > > > > > > __attribute__((target ("sse3"))) void f4 (void) {}
>> > > > > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
>> > > > > > > > __attribute__((target ("sse4"))) void f6 (void) {}
>> > > > > > > > takes about 3 seconds to compile at -O2, because
>set_cfun is terribly
>> > > > > > > > expensive and there are hundreds of such calls.
>> > > > > > > > The following patch is just a quick change to avoid
>some of them:
>> > > > > > > > execute_function_todo starts with:
>> > > > > > > >   unsigned int flags = (size_t)data;
>> > > > > > > >   flags &= ~cfun->last_verified;
>> > > > > > > >   if (!flags)
>> > > > > > > >     return;
>> > > > > > > > and if flags is initially zero, it does nothing.
>> > > > > > > > Similarly, execute_function_dump has the whole body
>surrounded by
>> > > > > > > >   if (dump_file && current_function_decl)
>> > > > > > > > and thus if dump_file is NULL, there is nothing to do.
>> > > > > > > > So IMHO in neither case (which happens pretty
>frequently) we need to
>> > > > > > > > set_cfun to every function during IPA.
>> > > > > > > > 
>> > > > > > > > Also, I wonder if we couldn't defer the expensive
>ira_init, if the info
>> > > > > > > > computed by it is used only during RTL optimization
>passes (haven't verified
>> > > > > > > > it yet), then supposedly we could just remember using
>some target hook
>> > > > > > > > what the last state was when we did ira_init last time,
>and call ira_init
>> > > > > > > > again at the start of expansion or so if it is
>different from the
>> > > > > > > > last time.
>> > > > > > > 
>> > > > > > > I was wondering whether the expensive parts of set_cfun
>could only be
>> > > > > > > run in pass_all_optimizations (and the -Og equivalent)
>but not when
>> > > > > > > changing functions in early and IPA passes.
>> > > > > > 
>> > > > > > Sounds like a hack ;)
>> > > > > > 
>> > > > > > Better get things working without the
>cfun/current_function_decl globals.
>> > > > > > Wasn't there someone replacing all implicit uses with
>explicit ones
>> > > > > > for stuff like n_basic_blocks?
>> > > > > 
>> > > > > I was working on this:
>> > > > > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
>> > > > > though I switched to other tasks I felt were higher priority;
>sorry.
>> > > > > 
>> > > > > Do you still want me to go ahead and commit the series of
>changes you
>> > > > > pre-approved there?
>> > > > > 
>> > > > > i.e. the "n_basic_blocks" macro goes away in favor of:
>> > > > >    n_basic_blocks_for_fn (cfun)
>> > > > > as a renaming of the existing n_basic_blocks_for_function
>macro,
>> > > > > followed up by analogous changes to the other macros.
>> > > > > 
>> > > > > Or should I repost before committing?
>> > > > 
>> > > > I'd say create the n_basic_blocks patch and post it, that gives
>> > > > people a chance to object.  If nobody chimes in I approve it
>> > > > and pre-approve the rest ;)
>> > > > 
>> > > > Using n_basic_blocks_for_fn (cfun) might feel backwards if
>> > > > eventually we'd want to C++-ify struct function and make
>> > > > n_basic_blocks a member function which would make it
>> > > > cfun->n_basic_blocks () instead.  Ok, I think that will get
>> > > > us into C++ bikeshedding again ;)
>> > > 
>> > > [I can't face another C vs C++ discussion right now :)]
>> > > 
>> > > Thanks.  Attached is such a patch, eliminating the:
>> > >   n_basic_blocks
>> > > macro in favor of
>> > >   n_basic_blocks_for_fn (cfun)
>> > > 
>> > > Successfully bootstrapped on x86_64-unknown-linux-gnu, and
>successfully
>> > > compiled stage1 on spu-unknown-elf and s390-linux-gnu (given that
>those
>> > > config files are affected).
>> > > 
>> > > Given the conditional pre-approval above, I'm posting here to
>give
>> > > people a chance to object - otherwise I'll commit, and followup

>with the
>> > > other macros that implicitly use cfun as per the thread linked to
>above.
>> > 
>> > Committed to trunk as r204995; I plan to commit followup patches to
>> > remove the other such macros.
>> 
>> Thanks!
>
>The following removed the "n_edges" macro.  I committed it to trunk as
>r205044 having successfully bootstrapped on x86_64-unknown-linux-gnu.
>
>(should I continue to post these patches as I commit them?)

Yes.

Thanks,
Richard.



* Committed: removal of ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros
  2013-11-19 17:29               ` Committed: removal of n_edges macro David Malcolm
  2013-11-19 17:33                 ` Richard Biener
@ 2013-11-20  1:12                 ` David Malcolm
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
  1 sibling, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-11-20  1:12 UTC (permalink / raw)
  To: Richard Biener; +Cc: Martin Jambor, Jakub Jelinek, gcc-patches

[-- Attachment #1: Type: text/plain, Size: 5556 bytes --]

On Tue, 2013-11-19 at 11:52 -0500, David Malcolm wrote:
> On Tue, 2013-11-19 at 09:49 +0100, Richard Biener wrote:
> > On Mon, 18 Nov 2013, David Malcolm wrote:
> > 
> > > On Fri, 2013-11-15 at 20:38 -0500, David Malcolm wrote:
> > > > On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> > > > > On Wed, 13 Nov 2013, David Malcolm wrote:
> > > > > 
> > > > > > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > > > > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > > > > > 
> > > > > > > > Hi,
> > > > > > > > 
> > > > > > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > > > > > Hi!
> > > > > > > > > 
> > > > > > > > > void f1 (void) {}
> > > > > > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > > > > > expensive and there are hundreds of such calls.
> > > > > > > > > The following patch is just a quick change to avoid some of them:
> > > > > > > > > execute_function_todo starts with:
> > > > > > > > >   unsigned int flags = (size_t)data;
> > > > > > > > >   flags &= ~cfun->last_verified;
> > > > > > > > >   if (!flags)
> > > > > > > > >     return;
> > > > > > > > > and if flags is initially zero, it does nothing.
> > > > > > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > > > > > >   if (dump_file && current_function_decl)
> > > > > > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > > > > > So IMHO in neither case (which happens pretty frequently) we need to
> > > > > > > > > set_cfun to every function during IPA.
> > > > > > > > > 
> > > > > > > > > Also, I wonder if we couldn't defer the expensive ira_init, if the info
> > > > > > > > > computed by it is used only during RTL optimization passes (haven't verified
> > > > > > > > > it yet), then supposedly we could just remember using some target hook
> > > > > > > > > what the last state was when we did ira_init last time, and call ira_init
> > > > > > > > > again at the start of expansion or so if it is different from the
> > > > > > > > > last time.
> > > > > > > > 
> > > > > > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > > > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > > > > > changing functions in early and IPA passes.
> > > > > > > 
> > > > > > > Sounds like a hack ;)
> > > > > > > 
> > > > > > > Better get things working without the cfun/current_function_decl globals.
> > > > > > > Wasn't there someone replacing all implicit uses with explicit ones
> > > > > > > for stuff like n_basic_blocks?
> > > > > > 
> > > > > > I was working on this:
> > > > > > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > > > > > though I switched to other tasks I felt were higher priority; sorry.
> > > > > > 
> > > > > > Do you still want me to go ahead and commit the series of changes you
> > > > > > pre-approved there?
> > > > > > 
> > > > > > i.e. the "n_basic_blocks" macro goes away in favor of:
> > > > > >    n_basic_blocks_for_fn (cfun)
> > > > > > as a renaming of the existing n_basic_blocks_for_function macro,
> > > > > > followed up by analogous changes to the other macros.
> > > > > > 
> > > > > > Or should I repost before committing?
> > > > > 
> > > > > I'd say create the n_basic_blocks patch and post it, that gives
> > > > > people a chance to object.  If nobody chimes in I approve it
> > > > > and pre-approve the rest ;)
> > > > > 
> > > > > Using n_basic_blocks_for_fn (cfun) might feel backwards if
> > > > > eventually we'd want to C++-ify struct function and make
> > > > > n_basic_blocks a member function which would make it
> > > > > cfun->n_basic_blocks () instead.  Ok, I think that will get
> > > > > us into C++ bikeshedding again ;)
> > > > 
> > > > [I can't face another C vs C++ discussion right now :)]
> > > > 
> > > > Thanks.  Attached is such a patch, eliminating the:
> > > >   n_basic_blocks
> > > > macro in favor of
> > > >   n_basic_blocks_for_fn (cfun)
> > > > 
> > > > Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
> > > > compiled stage1 on spu-unknown-elf and s390-linux-gnu (given that those
> > > > config files are affected).
> > > > 
> > > > Given the conditional pre-approval above, I'm posting here to give
> > > > people a chance to object - otherwise I'll commit, and followup with the
> > > > other macros that implicitly use cfun as per the thread linked to above.
> > > 
> > > Committed to trunk as r204995; I plan to commit followup patches to
> > > remove the other such macros.
> > 
> > Thanks!
> 
> The following removed the "n_edges" macro.  I committed it to trunk as
> r205044 having successfully bootstrapped on x86_64-unknown-linux-gnu.
> 
> (should I continue to post these patches as I commit them?)

This one removed the ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros.

I committed it to trunk as r205055 having bootstrapped&regtested on
x86_64-unknown-linux-gnu, and also verified stage1 compile for the
following targets (the other ones touched by the patch):
  bfin-unknown-none
  nds32le-unknown-elf
  arm-unknown-eabi
  rs6000-ibm-aix6.0
  frv-unknown-linux-gnu
  alpha-linux-gnu
  ia64-unknown-linux-gnu


[-- Attachment #2: remove-ENTRY_BLOCK_PTR-and-EXIT_BLOCK_PTR.patch --]
[-- Type: text/x-patch, Size: 278934 bytes --]

commit a5e3d3f995732f63e7224b9086d121ac8ff1473f
Author: David Malcolm <dmalcolm@redhat.com>
Date:   Mon Nov 18 21:16:04 2013 -0500

    Eliminate ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros
    
    gcc/
    
    	* basic-block.h (ENTRY_BLOCK_PTR_FOR_FUNCTION): Rename macro to...
    	(ENTRY_BLOCK_PTR_FOR_FN): ...this.
    	(EXIT_BLOCK_PTR_FOR_FUNCTION): Rename macro to...
    	(EXIT_BLOCK_PTR_FOR_FN): ...this.
    	(ENTRY_BLOCK_PTR): Eliminate macro as work towards making uses of
    	cfun be explicit.
    	(EXIT_BLOCK_PTR): Likewise.
    	(FOR_ALL_BB): Rework for now to eliminate use of "ENTRY_BLOCK_PTR".
    	(FOR_ALL_BB_FN): Update for renaming of
    	"ENTRY_BLOCK_PTR_FOR_FUNCTION" to "ENTRY_BLOCK_PTR_FOR_FN".
    
    	* cfg.c (init_flow): Likewise.
    	(check_bb_profile): Likewise.
    	* cfganal.c (pre_and_rev_post_order_compute_fn): Likewise.
    	* cfgcleanup.c (walk_to_nondebug_insn): Likewise.
    	* cfghooks.c (account_profile_record): Likewise.
    	* cfgloop.c (init_loops_structure): Likewise.
    	* cgraphbuild.c (record_eh_tables): Likewise.
    	(compute_call_stmt_bb_frequency): Likewise.
    	* ipa-inline-analysis.c (compute_bb_predicates): Likewise.
    	* lto-streamer-in.c (input_cfg): Likewise.
    	* predict.c (maybe_hot_frequency_p): Likewise.
    	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
    	* tree-inline.c (initialize_cfun): Likewise.
    	(copy_cfg_body): Likewise.
    	(copy_body): Likewise.
    	(tree_function_versioning): Likewise.
    
    	* bb-reorder.c (add_labels_and_missing_jumps): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(duplicate_computed_gotos): Remove usage of EXIT_BLOCK_PTR macro.
    	(find_rarely_executed_basic_blocks_and_crossing_edges): Remove uses of
    	macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(connect_traces): Likewise.
    	(rest_of_handle_reorder_blocks): Remove usage of EXIT_BLOCK_PTR macro.
    	(bb_to_key): Remove usage of ENTRY_BLOCK_PTR macro.
    	(fix_crossing_conditional_branches): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	(find_traces_1_round): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(fix_up_fall_thru_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(find_traces): Remove usage of ENTRY_BLOCK_PTR macro.
    	(fix_up_crossing_landing_pad): Remove usage of EXIT_BLOCK_PTR macro.
    	(rotate_loop): Likewise.
    	* bt-load.c (migrate_btr_def): Remove usage of ENTRY_BLOCK_PTR macro.
    	* cfg.c (clear_aux_for_edges): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(alloc_aux_for_edges): Likewise.
    	(clear_bb_flags): Remove usage of ENTRY_BLOCK_PTR macro.
    	(cached_make_edge): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(compact_blocks): Likewise.
    	(clear_edges): Likewise.
    	* cfganal.c (single_pred_before_succ_order): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(bitmap_union_of_succs): Remove usage of EXIT_BLOCK_PTR macro.
    	(bitmap_union_of_preds): Remove usage of ENTRY_BLOCK_PTR macro.
    	(bitmap_intersection_of_succs): Remove usage of EXIT_BLOCK_PTR macro.
    	(bitmap_intersection_of_preds): Remove usage of ENTRY_BLOCK_PTR macro.
    	(inverted_post_order_compute): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(compute_dominance_frontiers_1): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(post_order_compute): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(connect_infinite_loops_to_exit): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	(remove_fake_edges): Remove usage of ENTRY_BLOCK_PTR macro.
    	(add_noreturn_fake_exit_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(find_pdom): Remove uses of macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(remove_fake_exit_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(verify_edge_list): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(print_edge_list): Likewise.
    	(create_edge_list): Likewise.
    	(find_unreachable_blocks): Remove usage of ENTRY_BLOCK_PTR macro.
    	(mark_dfs_back_edges): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	* cfgbuild.c (find_bb_boundaries): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(find_many_sub_basic_blocks): Remove usage of EXIT_BLOCK_PTR macro.
    	(make_edges): Remove uses of macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* cfgcleanup.c (delete_unreachable_blocks): Likewise.
    	(try_optimize_cfg): Likewise.
    	(try_head_merge_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(try_crossjump_to_edge): Remove usage of ENTRY_BLOCK_PTR macro.
    	(try_crossjump_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(merge_blocks_move): Remove usage of ENTRY_BLOCK_PTR macro.
    	(outgoing_edges_match): Remove usage of EXIT_BLOCK_PTR macro.
    	(try_forward_edges): Likewise.
    	(try_simplify_condjump): Likewise.
    	* cfgexpand.c (gimple_expand_cfg): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(construct_exit_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(construct_init_block): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(expand_gimple_basic_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(expand_gimple_tailcall): Likewise.
    	* cfghooks.c (can_duplicate_block_p): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(tidy_fallthru_edges): Likewise.
    	(verify_flow_info): Likewise.
    	* cfgloop.c (flow_bb_inside_loop_p): Likewise.
    	(num_loop_branches): Remove usage of EXIT_BLOCK_PTR macro.
    	(disambiguate_multiple_latches): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(get_loop_exit_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(bb_loop_header_p): Remove usage of ENTRY_BLOCK_PTR macro.
    	(get_loop_body_in_bfs_order): Remove usage of EXIT_BLOCK_PTR macro.
    	(get_loop_body_in_dom_order): Likewise.
    	(get_loop_body): Likewise.
    	* cfgloopanal.c (mark_irreducible_loops): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* cfgloopmanip.c (create_preheader): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(remove_path): Remove usage of EXIT_BLOCK_PTR macro.
    	(fix_bb_placement): Likewise.
    	* cfgrtl.c (rtl_block_empty_p): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(rtl_can_remove_branch_p): Remove usage of EXIT_BLOCK_PTR macro.
    	(cfg_layout_split_edge): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(rtl_flow_call_edges_add): Remove usage of EXIT_BLOCK_PTR macro.
    	(cfg_layout_can_merge_blocks_p): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(cfg_layout_redirect_edge_and_branch): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(fixup_fallthru_exit_predecessor): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(fixup_reorder_chain): Likewise.
    	(relink_block_chain): Likewise.
    	(cfg_layout_delete_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(rtl_verify_bb_layout): Remove usage of ENTRY_BLOCK_PTR macro.
    	(cfg_layout_duplicate_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(force_one_exit_fallthru): Likewise.
    	(rtl_verify_fallthru): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(rtl_verify_edges): Likewise.
    	(commit_edge_insertions): Likewise.
    	(commit_one_edge_insertion): Likewise.
    	(rtl_split_edge): Likewise.
    	(force_nonfallthru_and_redirect): Likewise.
    	(outof_cfg_layout_mode): Remove usage of EXIT_BLOCK_PTR macro.
    	(skip_insns_after_block): Likewise.
    	(fixup_partition_crossing): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(purge_dead_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(rtl_can_merge_blocks): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(contains_no_active_insn_p): Likewise.
    	(emit_insn_at_entry): Remove usage of ENTRY_BLOCK_PTR macro.
    	(entry_of_function): Likewise.
    	(last_bb_in_partition): Remove usage of EXIT_BLOCK_PTR macro.
    	(fixup_new_cold_bb): Likewise.
    	(patch_jump_insn): Likewise.
    	(try_redirect_by_replacing_jump): Likewise.
    	(block_label): Likewise.
    	(could_fall_through): Likewise.
    	(can_fallthru): Likewise.
    	* cgraphbuild.c (cgraph_rebuild_references): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(rebuild_cgraph_edges): Likewise.
    	* cgraphunit.c (init_lowered_empty_function): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(expand_thunk): Remove usage of EXIT_BLOCK_PTR macro.
    	* combine.c (get_last_value): Remove usage of ENTRY_BLOCK_PTR macro.
    	(distribute_links): Remove usage of EXIT_BLOCK_PTR macro.
    	(get_last_value_validate): Remove usage of ENTRY_BLOCK_PTR macro.
    	(try_combine): Remove usage of EXIT_BLOCK_PTR macro.
    	(reg_num_sign_bit_copies_for_combine): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(reg_nonzero_bits_for_combine): Likewise.
    	(set_nonzero_bits_and_sign_copies): Likewise.
    	(combine_instructions): Likewise.
    	* cprop.c (one_cprop_pass): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(bypass_conditional_jumps): Likewise.
    	(bypass_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(find_implicit_sets): Likewise.
    	(cprop_jump): Likewise.
    	* cse.c (cse_cc_succs): Likewise.
    	(cse_find_path): Likewise.
    	* df-problems.c (df_lr_confluence_0): Likewise.
    	* df-scan.c (df_entry_block_defs_collect): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(df_exit_block_uses_collect): Remove usage of EXIT_BLOCK_PTR macro.
    	* dominance.c (iterate_fix_dominators): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(calc_idoms): Remove uses of macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(determine_dominators_for_sons): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(calc_dfs_tree): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(prune_bbs_to_update_dominators): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(calc_dfs_tree_nonrec): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	* domwalk.c (cmp_bb_postorder): Likewise.
    	* dse.c (dse_step1): Remove usage of EXIT_BLOCK_PTR macro.
    	* except.c (finish_eh_generation): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(sjlj_emit_function_enter): Likewise.
    	* final.c (compute_alignments): Likewise.
    	* function.c (thread_prologue_and_epilogue_insns): Remove uses of
    	macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(reposition_prologue_and_epilogue_notes): Remove usage of
    	EXIT_BLOCK_PTR macro.
    	(convert_jumps_to_returns): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(regno_clobbered_at_setjmp): Remove usage of ENTRY_BLOCK_PTR macro.
    	(next_block_for_reg): Remove usage of EXIT_BLOCK_PTR macro.
    	* gcse.c (hoist_code): Remove usage of ENTRY_BLOCK_PTR macro.
    	(update_bb_reg_pressure): Remove usage of EXIT_BLOCK_PTR macro.
    	(compute_code_hoist_vbeinout): Likewise.
    	(should_hoist_expr_to_dom): Remove usage of ENTRY_BLOCK_PTR macro.
    	(pre_expr_reaches_here_p_work): Likewise.
    	* gimple-iterator.c (gsi_commit_edge_inserts): Likewise.
    	(gimple_find_edge_insert_loc): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	* gimple-ssa-strength-reduction.c (slsr_process_phi): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* graph.c (draw_cfg_nodes_for_loop): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	* graphite-clast-to-gimple.c (translate_clast_user): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* graphite-scop-detection.c (build_scops): Likewise.
    	(create_sese_edges): Remove usage of EXIT_BLOCK_PTR macro.
    	(scopdet_basic_block_info): Remove usage of ENTRY_BLOCK_PTR macro.
    	* haifa-sched.c (restore_bb_notes): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	(unlink_bb_notes): Likewise.
    	(create_check_block_twin): Likewise.
    	(init_before_recovery): Likewise.
    	(sched_extend_bb): Likewise.
    	(priority): Likewise.
    	* hw-doloop.c (reorder_loops): Likewise.
    	(discover_loop): Likewise.
    	* ifcvt.c (dead_or_predicable): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(find_if_case_1): Remove usage of EXIT_BLOCK_PTR macro.
    	(block_has_only_trap): Likewise.
    	(cond_exec_find_if_block): Likewise.
    	(merge_if_block): Likewise.
    	* ipa-inline-analysis.c (param_change_prob): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(record_modified): Likewise.
    	* ipa-pure-const.c (execute_warn_function_noreturn): Remove usage of
    	EXIT_BLOCK_PTR macro.
    	(local_pure_const): Likewise.
    	* ipa-split.c (split_function): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(find_split_points): Likewise.
    	(consider_split): Likewise.
    	(find_return_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(verify_non_ssa_vars): Remove usage of ENTRY_BLOCK_PTR macro.
    	* ira-build.c (ira_loop_tree_body_rev_postorder): Likewise.
    	* ira-color.c (print_loop_title): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	* ira-emit.c (entered_from_non_parent_p): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(ira_emit): Remove usage of EXIT_BLOCK_PTR macro.
    	* ira-int.h (ira_assert): Remove usage of ENTRY_BLOCK_PTR macro.
    	* ira.c (split_live_ranges_for_shrink_wrap): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* lcm.c (compute_rev_insert_delete): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(compute_nearerout): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(compute_farthest): Likewise.
    	(compute_available): Likewise.
    	(compute_insert_delete): Remove usage of EXIT_BLOCK_PTR macro.
    	(compute_laterin): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(compute_earliest): Likewise.
    	(compute_antinout_edge): Likewise.
    	* loop-iv.c (simplify_using_initial_values): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* loop-unswitch.c (unswitch_loop): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	* lra-assigns.c (find_hard_regno_for): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	* lra-constraints.c (lra_inheritance): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	* lra-lives.c (lra_create_live_ranges): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* lra.c (has_nonexceptional_receiver): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	* lto-streamer-in.c (input_function): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	* lto-streamer-out.c (output_cfg): Likewise.
    	* mcf.c (adjust_cfg_counts): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(create_fixup_graph): Remove usage of ENTRY_BLOCK_PTR macro.
    	* mode-switching.c (optimize_mode_switching): Likewise.
    	(create_pre_exit): Remove usage of EXIT_BLOCK_PTR macro.
    	* modulo-sched.c (rest_of_handle_sms): Likewise.
    	(canon_loop): Likewise.
    	* omp-low.c (build_omp_regions): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	* postreload-gcse.c (eliminate_partially_redundant_loads): Remove uses
    	of macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* predict.c (rebuild_frequencies): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(propagate_freq): Remove usage of EXIT_BLOCK_PTR macro.
    	(estimate_bb_frequencies): Remove usage of ENTRY_BLOCK_PTR macro.
    	(tree_estimate_probability_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(expensive_function_p): Remove usage of ENTRY_BLOCK_PTR macro.
    	(tree_bb_level_predictions): Remove usage of EXIT_BLOCK_PTR macro.
    	(counts_to_freqs): Remove usage of ENTRY_BLOCK_PTR macro.
    	(apply_return_prediction): Remove usage of EXIT_BLOCK_PTR macro.
    	(estimate_loops): Remove usage of ENTRY_BLOCK_PTR macro.
    	(gimple_predict_edge): Likewise.
    	(probably_never_executed): Likewise.
    	* profile.c (find_spanning_tree): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(branch_prob): Likewise.
    	(compute_branch_probabilities): Likewise.
    	(compute_frequency_overlap): Remove usage of ENTRY_BLOCK_PTR macro.
    	(is_inconsistent): Remove usage of EXIT_BLOCK_PTR macro.
    	(read_profile_edge_counts): Remove usage of ENTRY_BLOCK_PTR macro.
    	(set_bb_counts): Likewise.
    	(correct_negative_edge_counts): Likewise.
    	(get_exec_counts): Likewise.
    	(instrument_values): Likewise.
    	(instrument_edges): Likewise.
    	* reg-stack.c (convert_regs): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(compensate_edges): Remove usage of ENTRY_BLOCK_PTR macro.
    	(convert_regs_exit): Remove usage of EXIT_BLOCK_PTR macro.
    	(convert_regs_entry): Remove usage of ENTRY_BLOCK_PTR macro.
    	(reg_to_stack): Likewise.
    	* regs.h (REG_N_SETS): Likewise.
    	* reload.c (find_dummy_reload): Likewise.
    	(combine_reloads): Likewise.
    	(push_reload): Likewise.
    	* reload1.c (has_nonexceptional_receiver): Remove usage of
    	EXIT_BLOCK_PTR macro.
    	* resource.c (mark_target_live_regs): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(find_basic_block): Likewise.
    	* sched-ebb.c (ebb_add_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(schedule_ebbs): Likewise.
    	* sched-int.h (sel_sched_p): Likewise.
    	* sched-rgn.c (compute_dom_prob_ps): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(rgn_add_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(haifa_find_rgns): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(propagate_deps): Remove usage of EXIT_BLOCK_PTR macro.
    	(extend_rgns): Likewise.
    	(find_single_block_region): Likewise.
    	* sel-sched-ir.c (sel_remove_loop_preheader): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(setup_nop_and_exit_insns): Remove usage of EXIT_BLOCK_PTR macro.
    	(sel_create_recovery_block): Likewise.
    	(bb_ends_ebb_p): Likewise.
    	(sel_bb_end): Likewise.
    	(sel_bb_head): Likewise.
    	(free_lv_sets): Likewise.
    	(init_lv_sets): Likewise.
    	(tidy_control_flow): Likewise.
    	(maybe_tidy_empty_bb): Likewise.
    	* sel-sched-ir.h (_succ_iter_cond): Likewise.
    	(_succ_iter_start): Likewise.
    	(sel_bb_empty_or_nop_p): Likewise.
    	(get_loop_exit_edges_unique_dests): Likewise.
    	(inner_loop_header_p): Likewise.
    	* sel-sched.c (create_block_for_bookkeeping): Likewise.
    	(find_block_for_bookkeeping): Likewise.
    	* store-motion.c (remove_reachable_equiv_notes): Likewise.
    	(insert_store): Likewise.
    	* trans-mem.c (ipa_tm_transform_clone): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(tm_memopt_compute_available): Remove usage of EXIT_BLOCK_PTR macro.
    	(ipa_tm_scan_irr_function): Remove usage of ENTRY_BLOCK_PTR macro.
    	(gate_tm_init): Likewise.
    	(tm_region_init): Likewise.
    	* tree-cfg.c (execute_fixup_cfg): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(execute_warn_function_return): Remove usage of EXIT_BLOCK_PTR macro.
    	(split_critical_edges): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(print_loops): Remove usage of ENTRY_BLOCK_PTR macro.
    	(move_sese_region_to_fn): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(gimple_redirect_edge_and_branch): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(gimple_verify_flow_info): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(remove_edge_and_dominated_blocks): Remove usage of EXIT_BLOCK_PTR
    	macro.
    	(make_edges): Remove uses of macros: ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(gimple_flow_call_edges_add): Remove usage of EXIT_BLOCK_PTR macro.
    	(make_blocks): Remove usage of ENTRY_BLOCK_PTR macro.
    	(build_gimple_cfg): Likewise.
    	(gimple_duplicate_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(gimple_can_merge_blocks_p): Likewise.
    	* tree-cfgcleanup.c (tree_forwarder_block_p): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* tree-complex.c (update_parameter_components): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* tree-if-conv.c (get_loop_body_in_if_conv_order): Remove usage of
    	EXIT_BLOCK_PTR macro.
    	* tree-inline.c (tree_function_versioning): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(delete_unreachable_blocks_update_callgraph): Likewise.
    	(initialize_cfun): Likewise.
    	(copy_cfg_body): Remove usage of ENTRY_BLOCK_PTR macro.
    	(copy_edges_for_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(remap_ssa_name): Remove usage of ENTRY_BLOCK_PTR macro.
    	* tree-into-ssa.c (update_ssa): Likewise.
    	(maybe_register_def): Remove usage of EXIT_BLOCK_PTR macro.
    	(insert_updated_phi_nodes_for): Remove usage of ENTRY_BLOCK_PTR macro.
    	(rewrite_into_ssa): Likewise.
    	(rewrite_debug_stmt_uses): Likewise.
    	* tree-outof-ssa.c (expand_phi_nodes): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* tree-profile.c (gimple_gen_ic_func_profiler): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* tree-scalar-evolution.h (block_before_loop): Likewise.
    	* tree-sra.c (sra_ipa_reset_debug_stmts): Likewise.
    	(dump_dereferences_table): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(analyze_caller_dereference_legality): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(propagate_dereference_distances): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(initialize_parameter_reductions): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	* tree-ssa-ccp.c (gsi_prev_dom_bb_nondebug): Likewise.
    	(optimize_stack_restore): Remove usage of EXIT_BLOCK_PTR macro.
    	* tree-ssa-coalesce.c (create_outofssa_var_map): Likewise.
    	* tree-ssa-dce.c (eliminate_unnecessary_stmts): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(remove_dead_stmt): Remove usage of EXIT_BLOCK_PTR macro.
    	(propagate_necessity): Remove usage of ENTRY_BLOCK_PTR macro.
    	(mark_control_dependent_edges_necessary): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	* tree-ssa-dom.c (eliminate_degenerate_phis): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	(tree_ssa_dominator_optimize): Remove usage of EXIT_BLOCK_PTR macro.
    	* tree-ssa-live.c (verify_live_on_entry): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(calculate_live_on_exit): Likewise.
    	(set_var_live_on_entry): Remove usage of ENTRY_BLOCK_PTR macro.
    	(loe_visit_block): Likewise.
    	* tree-ssa-live.h (live_on_exit): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(live_on_entry): Likewise.
    	* tree-ssa-loop-ivopts.c (find_interesting_uses): Remove usage of
    	EXIT_BLOCK_PTR macro.
    	* tree-ssa-loop-manip.c (compute_live_loop_exits): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* tree-ssa-loop-niter.c (simplify_using_initial_conditions): Likewise.
    	(bound_difference): Likewise.
    	* tree-ssa-loop-prefetch.c (may_use_storent_in_loop_p): Remove usage
    	of EXIT_BLOCK_PTR macro.
    	* tree-ssa-loop-unswitch.c (simplify_using_entry_checks): Remove usage
    	of ENTRY_BLOCK_PTR macro.
    	* tree-ssa-math-opts.c (register_division_in): Likewise.
    	* tree-ssa-phiprop.c (tree_ssa_phiprop): Likewise.
    	* tree-ssa-pre.c (compute_avail): Likewise.
    	(compute_antic): Remove usage of EXIT_BLOCK_PTR macro.
    	(insert): Remove usage of ENTRY_BLOCK_PTR macro.
    	* tree-ssa-propagate.c (ssa_prop_init): Likewise.
    	(simulate_block): Remove usage of EXIT_BLOCK_PTR macro.
    	(cfg_blocks_add): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	(add_control_edge): Remove usage of EXIT_BLOCK_PTR macro.
    	* tree-ssa-reassoc.c (do_reassoc): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(build_and_add_sum): Remove usage of ENTRY_BLOCK_PTR macro.
    	* tree-ssa-sink.c (nearest_common_dominator_of_uses): Likewise.
    	(execute_sink_code): Remove usage of EXIT_BLOCK_PTR macro.
    	* tree-ssa-uninit.c (find_dom): Remove usage of ENTRY_BLOCK_PTR macro.
    	(compute_control_dep_chain): Remove usage of EXIT_BLOCK_PTR macro.
    	(find_pdom): Likewise.
    	(warn_uninitialized_vars): Remove usage of ENTRY_BLOCK_PTR macro.
    	* tree-stdarg.c (reachable_at_most_once): Likewise.
    	* tree-tailcall.c (tree_optimize_tail_calls_1): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(eliminate_tail_call): Likewise.
    	* tsan.c (instrument_func_entry): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	(instrument_func_exit): Remove usage of EXIT_BLOCK_PTR macro.
    	* var-tracking.c (vt_initialize): Remove uses of macros:
    	ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR.
    	(vt_add_function_parameter): Remove usage of ENTRY_BLOCK_PTR macro.
    	(vt_find_locations): Remove usage of EXIT_BLOCK_PTR macro.
    	(vt_stack_adjustments): Remove uses of macros: ENTRY_BLOCK_PTR,
    	EXIT_BLOCK_PTR.
    	* varasm.c (assemble_start_function): Remove usage of ENTRY_BLOCK_PTR
    	macro.
    	* config/bfin/bfin.c (hwloop_optimize): Likewise.
    	* config/nds32/nds32.c (nds32_fp_as_gp_check_available): Remove usage
    	of EXIT_BLOCK_PTR macro.
    	* config/arm/arm.c (require_pic_register): Remove usage of
    	ENTRY_BLOCK_PTR macro.
	(arm_r3_live_at_start_p): Likewise.
    	(any_sibcall_could_use_r3): Remove usage of EXIT_BLOCK_PTR macro.
    	* config/rs6000/rs6000.c (rs6000_emit_prologue): Likewise.
    	* config/frv/frv.c (frv_optimize_membar_global): Likewise.
    	* config/alpha/alpha.c (alpha_gp_save_rtx): Remove usage of
    	ENTRY_BLOCK_PTR macro.
    	* config/i386/i386.c (ix86_count_insn): Likewise.
    	(ix86_seh_fixup_eh_fallthru): Remove usage of EXIT_BLOCK_PTR macro.
    	(ix86_pad_short_function): Likewise.
    	(ix86_compute_frame_layout): Remove usage of ENTRY_BLOCK_PTR macro.
    	(ix86_pad_returns): Remove usage of EXIT_BLOCK_PTR macro.
    	(ix86_eax_live_at_start_p): Remove usage of ENTRY_BLOCK_PTR macro.
    	(add_condition_to_bb): Remove usage of EXIT_BLOCK_PTR macro.
    	(ix86_expand_epilogue): Likewise.
    	* config/ia64/ia64.c (ia64_asm_unwind_emit): Likewise.
    	(ia64_expand_prologue): Likewise.

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 38391be..58bacc3 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -312,8 +312,8 @@ struct GTY(()) control_flow_graph {
 };
 
 /* Defines for accessing the fields of the CFG structure for function FN.  */
-#define ENTRY_BLOCK_PTR_FOR_FUNCTION(FN)     ((FN)->cfg->x_entry_block_ptr)
-#define EXIT_BLOCK_PTR_FOR_FUNCTION(FN)	     ((FN)->cfg->x_exit_block_ptr)
+#define ENTRY_BLOCK_PTR_FOR_FN(FN)	     ((FN)->cfg->x_entry_block_ptr)
+#define EXIT_BLOCK_PTR_FOR_FN(FN)	     ((FN)->cfg->x_exit_block_ptr)
 #define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
 #define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
@@ -327,8 +327,6 @@ struct GTY(()) control_flow_graph {
   ((*basic_block_info_for_function (FN))[(N)] = (BB))
 
 /* Defines for textual backward source compatibility.  */
-#define ENTRY_BLOCK_PTR		(cfun->cfg->x_entry_block_ptr)
-#define EXIT_BLOCK_PTR		(cfun->cfg->x_exit_block_ptr)
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
@@ -378,10 +376,10 @@ struct GTY(()) control_flow_graph {
    exit block).  */
 
 #define FOR_ALL_BB(BB) \
-  for (BB = ENTRY_BLOCK_PTR; BB; BB = BB->next_bb)
+  for (BB = ENTRY_BLOCK_PTR_FOR_FN (cfun); BB; BB = BB->next_bb)
 
 #define FOR_ALL_BB_FN(BB, FN) \
-  for (BB = ENTRY_BLOCK_PTR_FOR_FUNCTION (FN); BB; BB = BB->next_bb)
+  for (BB = ENTRY_BLOCK_PTR_FOR_FN (FN); BB; BB = BB->next_bb)
 
 \f
 /* Stuff for recording basic block info.  */
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 45bf128..fc7b5b7 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -275,7 +275,7 @@ find_traces (int *n_traces, struct trace *traces)
   heap = fibheap_new ();
   max_entry_frequency = 0;
   max_entry_count = 0;
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     {
       bbd[e->dest->index].heap = heap;
       bbd[e->dest->index].node = fibheap_insert (heap, bb_to_key (e->dest),
@@ -348,7 +348,7 @@ rotate_loop (edge back_edge, struct trace *trace, int trace_n)
       edge_iterator ei;
 
       FOR_EACH_EDGE (e, ei, bb->succs)
-	if (e->dest != EXIT_BLOCK_PTR
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	    && bb_visited_trace (e->dest) != trace_n
 	    && (e->flags & EDGE_CAN_FALLTHRU)
 	    && !(e->flags & EDGE_COMPLEX))
@@ -524,7 +524,7 @@ find_traces_1_round (int branch_th, int exec_th, gcov_type count_th,
 	    {
 	      gcc_assert (!(e->flags & EDGE_FAKE));
 
-	      if (e->dest == EXIT_BLOCK_PTR)
+	      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		continue;
 
 	      if (bb_visited_trace (e->dest)
@@ -605,7 +605,7 @@ find_traces_1_round (int branch_th, int exec_th, gcov_type count_th,
 	  FOR_EACH_EDGE (e, ei, bb->succs)
 	    {
 	      if (e == best_edge
-		  || e->dest == EXIT_BLOCK_PTR
+		  || e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  || bb_visited_trace (e->dest))
 		continue;
 
@@ -680,7 +680,8 @@ find_traces_1_round (int branch_th, int exec_th, gcov_type count_th,
 			     header is not the first block of the function
 			     we can rotate the loop.  */
 
-			  if (best_edge->dest != ENTRY_BLOCK_PTR->next_bb)
+			  if (best_edge->dest
+			      != ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb)
 			    {
 			      if (dump_file)
 				{
@@ -776,7 +777,7 @@ find_traces_1_round (int branch_th, int exec_th, gcov_type count_th,
 	 is an end of the trace).  */
       FOR_EACH_EDGE (e, ei, bb->succs)
 	{
-	  if (e->dest == EXIT_BLOCK_PTR
+	  if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      || bb_visited_trace (e->dest))
 	    continue;
 
@@ -885,7 +886,8 @@ bb_to_key (basic_block bb)
      or whose predecessor edge is EDGE_DFS_BACK.  */
   FOR_EACH_EDGE (e, ei, bb->preds)
     {
-      if ((e->src != ENTRY_BLOCK_PTR && bbd[e->src->index].end_of_trace >= 0)
+      if ((e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	   && bbd[e->src->index].end_of_trace >= 0)
 	  || (e->flags & EDGE_DFS_BACK))
 	{
 	  int edge_freq = EDGE_FREQUENCY (e);
@@ -1098,7 +1100,7 @@ connect_traces (int n_traces, struct trace *traces)
 	    {
 	      int si = e->src->index;
 
-	      if (e->src != ENTRY_BLOCK_PTR
+	      if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 		  && (e->flags & EDGE_CAN_FALLTHRU)
 		  && !(e->flags & EDGE_COMPLEX)
 		  && bbd[si].end_of_trace >= 0
@@ -1141,7 +1143,7 @@ connect_traces (int n_traces, struct trace *traces)
 	    {
 	      int di = e->dest->index;
 
-	      if (e->dest != EXIT_BLOCK_PTR
+	      if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  && (e->flags & EDGE_CAN_FALLTHRU)
 		  && !(e->flags & EDGE_COMPLEX)
 		  && bbd[di].start_of_trace >= 0
@@ -1212,7 +1214,7 @@ connect_traces (int n_traces, struct trace *traces)
 	      bool try_copy = false;
 
 	      FOR_EACH_EDGE (e, ei, traces[t].last->succs)
-		if (e->dest != EXIT_BLOCK_PTR
+		if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		    && (e->flags & EDGE_CAN_FALLTHRU)
 		    && !(e->flags & EDGE_COMPLEX)
 		    && (!best || e->probability > best->probability))
@@ -1237,7 +1239,7 @@ connect_traces (int n_traces, struct trace *traces)
 		      {
 			int di = e2->dest->index;
 
-			if (e2->dest == EXIT_BLOCK_PTR
+			if (e2->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 			    || ((e2->flags & EDGE_CAN_FALLTHRU)
 				&& !(e2->flags & EDGE_COMPLEX)
 				&& bbd[di].start_of_trace >= 0
@@ -1253,7 +1255,7 @@ connect_traces (int n_traces, struct trace *traces)
 			  {
 			    best = e;
 			    best2 = e2;
-			    if (e2->dest != EXIT_BLOCK_PTR)
+			    if (e2->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 			      best2_len = traces[bbd[di].start_of_trace].length;
 			    else
 			      best2_len = INT_MAX;
@@ -1282,7 +1284,7 @@ connect_traces (int n_traces, struct trace *traces)
 			       traces[t].last->index, best->dest->index);
 		      if (!next_bb)
 			fputc ('\n', dump_file);
-		      else if (next_bb == EXIT_BLOCK_PTR)
+		      else if (next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 			fprintf (dump_file, "exit\n");
 		      else
 			fprintf (dump_file, "%d\n", next_bb->index);
@@ -1290,7 +1292,7 @@ connect_traces (int n_traces, struct trace *traces)
 
 		  new_bb = copy_bb (best->dest, best, traces[t].last, t);
 		  traces[t].last = new_bb;
-		  if (next_bb && next_bb != EXIT_BLOCK_PTR)
+		  if (next_bb && next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		    {
 		      t = bbd[next_bb->index].start_of_trace;
 		      traces[last_trace].last->aux = traces[t].first;
@@ -1413,7 +1415,7 @@ fix_up_crossing_landing_pad (eh_landing_pad old_lp, basic_block old_bb)
   JUMP_LABEL (jump) = post_label;
 
   /* Create new basic block to be dest for lp.  */
-  last_bb = EXIT_BLOCK_PTR->prev_bb;
+  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
   new_bb = create_basic_block (new_label, jump, last_bb);
   new_bb->aux = last_bb->aux;
   last_bb->aux = new_bb;
@@ -1663,8 +1665,8 @@ find_rarely_executed_basic_blocks_and_crossing_edges (void)
         /* We should never have EDGE_CROSSING set yet.  */
 	gcc_checking_assert ((flags & EDGE_CROSSING) == 0);
 
-	if (e->src != ENTRY_BLOCK_PTR
-	    && e->dest != EXIT_BLOCK_PTR
+	if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	    && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	    && BB_PARTITION (e->src) != BB_PARTITION (e->dest))
 	  {
 	    crossing_edges.safe_push (e);
@@ -1731,14 +1733,14 @@ add_labels_and_missing_jumps (vec<edge> crossing_edges)
       basic_block dest = e->dest;
       rtx label, new_jump;
 
-      if (dest == EXIT_BLOCK_PTR)
+      if (dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       /* Make sure dest has a label.  */
       label = block_label (dest);
 
       /* Nothing to do for non-fallthru edges.  */
-      if (src == ENTRY_BLOCK_PTR)
+      if (src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
       if ((e->flags & EDGE_FALLTHRU) == 0)
 	continue;
@@ -1832,7 +1834,7 @@ fix_up_fall_thru_edges (void)
 	      }
 	}
 
-      if (fall_thru && (fall_thru->dest != EXIT_BLOCK_PTR))
+      if (fall_thru && (fall_thru->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)))
 	{
 	  /* Check to see if the fall-thru edge is a crossing edge.  */
 
@@ -2066,7 +2068,7 @@ fix_crossing_conditional_branches (void)
 		  new_jump = emit_jump_insn (gen_jump (old_label));
 		  JUMP_LABEL (new_jump) = old_label;
 
-		  last_bb = EXIT_BLOCK_PTR->prev_bb;
+		  last_bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
 		  new_bb = create_basic_block (new_label, new_jump, last_bb);
 		  new_bb->aux = last_bb->aux;
 		  last_bb->aux = new_bb;
@@ -2319,7 +2321,7 @@ rest_of_handle_reorder_blocks (void)
   cleanup_cfg (CLEANUP_EXPENSIVE);
 
   FOR_EACH_BB (bb)
-    if (bb->next_bb != EXIT_BLOCK_PTR)
+    if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
   cfg_layout_finalize ();
 
@@ -2415,7 +2417,7 @@ duplicate_computed_gotos (void)
       int size, all_flags;
 
       /* Build the reorder chain for the original order of blocks.  */
-      if (bb->next_bb != EXIT_BLOCK_PTR)
+      if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bb->aux = bb->next_bb;
 
       /* Obviously the block has to end in a computed jump.  */
@@ -2465,7 +2467,7 @@ duplicate_computed_gotos (void)
 	 the exit block or the next block.
 	 The destination must have more than one predecessor.  */
       if (!single_succ_p (bb)
-	  || single_succ (bb) == EXIT_BLOCK_PTR
+	  || single_succ (bb) == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  || single_succ (bb) == bb->next_bb
 	  || single_pred_p (single_succ (bb)))
 	continue;
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index 348e40b..09eea06 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -1328,7 +1328,8 @@ migrate_btr_def (btr_def def, int min_cost)
   def_basic_block_freq = basic_block_freq (def->bb);
 
   for (attempt = get_immediate_dominator (CDI_DOMINATORS, def->bb);
-       !give_up && attempt && attempt != ENTRY_BLOCK_PTR && def->cost >= min_cost;
+       !give_up && attempt && attempt != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+       && def->cost >= min_cost;
        attempt = get_immediate_dominator (CDI_DOMINATORS, attempt))
     {
       /* Try to move the instruction that sets the target register into
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 166ad38..e35eee9 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -70,16 +70,16 @@ init_flow (struct function *the_fun)
   if (!the_fun->cfg)
     the_fun->cfg = ggc_alloc_cleared_control_flow_graph ();
   n_edges_for_fn (the_fun) = 0;
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun)
+  ENTRY_BLOCK_PTR_FOR_FN (the_fun)
     = ggc_alloc_cleared_basic_block_def ();
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun)->index = ENTRY_BLOCK;
-  EXIT_BLOCK_PTR_FOR_FUNCTION (the_fun)
+  ENTRY_BLOCK_PTR_FOR_FN (the_fun)->index = ENTRY_BLOCK;
+  EXIT_BLOCK_PTR_FOR_FN (the_fun)
     = ggc_alloc_cleared_basic_block_def ();
-  EXIT_BLOCK_PTR_FOR_FUNCTION (the_fun)->index = EXIT_BLOCK;
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun)->next_bb
-    = EXIT_BLOCK_PTR_FOR_FUNCTION (the_fun);
-  EXIT_BLOCK_PTR_FOR_FUNCTION (the_fun)->prev_bb
-    = ENTRY_BLOCK_PTR_FOR_FUNCTION (the_fun);
+  EXIT_BLOCK_PTR_FOR_FN (the_fun)->index = EXIT_BLOCK;
+  ENTRY_BLOCK_PTR_FOR_FN (the_fun)->next_bb
+    = EXIT_BLOCK_PTR_FOR_FN (the_fun);
+  EXIT_BLOCK_PTR_FOR_FN (the_fun)->prev_bb
+    = ENTRY_BLOCK_PTR_FOR_FN (the_fun);
 }
 \f
 /* Helper function for remove_edge and clear_edges.  Frees edge structure
@@ -109,10 +109,10 @@ clear_edges (void)
       vec_safe_truncate (bb->preds, 0);
     }
 
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     free_edge (e);
-  vec_safe_truncate (EXIT_BLOCK_PTR->preds, 0);
-  vec_safe_truncate (ENTRY_BLOCK_PTR->succs, 0);
+  vec_safe_truncate (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds, 0);
+  vec_safe_truncate (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs, 0);
 
   gcc_assert (!n_edges_for_fn (cfun));
 }
@@ -153,8 +153,8 @@ compact_blocks (void)
 {
   int i;
 
-  SET_BASIC_BLOCK (ENTRY_BLOCK, ENTRY_BLOCK_PTR);
-  SET_BASIC_BLOCK (EXIT_BLOCK, EXIT_BLOCK_PTR);
+  SET_BASIC_BLOCK (ENTRY_BLOCK, ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  SET_BASIC_BLOCK (EXIT_BLOCK, EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   if (df)
     df_compact_blocks ();
@@ -282,8 +282,8 @@ edge
 cached_make_edge (sbitmap edge_cache, basic_block src, basic_block dst, int flags)
 {
   if (edge_cache == NULL
-      || src == ENTRY_BLOCK_PTR
-      || dst == EXIT_BLOCK_PTR)
+      || src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+      || dst == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return make_edge (src, dst, flags);
 
   /* Does the requested edge already exist?  */
@@ -387,7 +387,7 @@ clear_bb_flags (void)
 {
   basic_block bb;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     bb->flags &= BB_FLAGS_TO_PRESERVE;
 }
 \f
@@ -411,7 +411,7 @@ check_bb_profile (basic_block bb, FILE * file, int indent, int flags)
   if (profile_status_for_function (fun) == PROFILE_ABSENT)
     return;
 
-  if (bb != EXIT_BLOCK_PTR_FOR_FUNCTION (fun))
+  if (bb != EXIT_BLOCK_PTR_FOR_FN (fun))
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
 	sum += e->probability;
@@ -428,7 +428,7 @@ check_bb_profile (basic_block bb, FILE * file, int indent, int flags)
 		 (flags & TDF_COMMENT) ? ";; " : "", s_indent,
 		 (int) lsum, (int) bb->count);
     }
-    if (bb != ENTRY_BLOCK_PTR_FOR_FUNCTION (fun))
+    if (bb != ENTRY_BLOCK_PTR_FOR_FN (fun))
     {
       sum = 0;
       FOR_EACH_EDGE (e, ei, bb->preds)
@@ -641,7 +641,8 @@ alloc_aux_for_edges (int size)
     {
       basic_block bb;
 
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		      EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
 	{
 	  edge e;
 	  edge_iterator ei;
@@ -660,7 +661,8 @@ clear_aux_for_edges (void)
   basic_block bb;
   edge e;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       edge_iterator ei;
       FOR_EACH_EDGE (e, ei, bb->succs)
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index 1c90f8c..30376b3 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -86,7 +86,7 @@ mark_dfs_back_edges (void)
   bitmap_clear (visited);
 
   /* Push the first edge on to the stack.  */
-  stack[sp++] = ei_start (ENTRY_BLOCK_PTR->succs);
+  stack[sp++] = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
 
   while (sp)
     {
@@ -101,7 +101,8 @@ mark_dfs_back_edges (void)
       ei_edge (ei)->flags &= ~EDGE_DFS_BACK;
 
       /* Check if the edge destination has been visited yet.  */
-      if (dest != EXIT_BLOCK_PTR && ! bitmap_bit_p (visited, dest->index))
+      if (dest != EXIT_BLOCK_PTR_FOR_FN (cfun) && ! bitmap_bit_p (visited,
+								  dest->index))
 	{
 	  /* Mark that we have visited the destination.  */
 	  bitmap_set_bit (visited, dest->index);
@@ -118,12 +119,14 @@ mark_dfs_back_edges (void)
 	}
       else
 	{
-	  if (dest != EXIT_BLOCK_PTR && src != ENTRY_BLOCK_PTR
+	  if (dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      && src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	      && pre[src->index] >= pre[dest->index]
 	      && post[dest->index] == 0)
 	    ei_edge (ei)->flags |= EDGE_DFS_BACK, found = true;
 
-	  if (ei_one_before_end_p (ei) && src != ENTRY_BLOCK_PTR)
+	  if (ei_one_before_end_p (ei)
+	      && src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    post[src->index] = postnum++;
 
 	  if (!ei_one_before_end_p (ei))
@@ -163,7 +166,7 @@ find_unreachable_blocks (void)
      be only one.  It isn't inconceivable that we might one day directly
      support Fortran alternate entry points.  */
 
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     {
       *tos++ = e->dest;
 
@@ -217,7 +220,8 @@ create_edge_list (void)
   /* Determine the number of edges in the flow graph by counting successor
      edges on each basic block.  */
   num_edges = 0;
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       num_edges += EDGE_COUNT (bb->succs);
     }
@@ -229,7 +233,8 @@ create_edge_list (void)
   num_edges = 0;
 
   /* Follow successors of blocks, and register these edges.  */
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     FOR_EACH_EDGE (e, ei, bb->succs)
       elist->index_to_edge[num_edges++] = e;
 
@@ -261,12 +266,12 @@ print_edge_list (FILE *f, struct edge_list *elist)
   for (x = 0; x < elist->num_edges; x++)
     {
       fprintf (f, " %-4d - edge(", x);
-      if (INDEX_EDGE_PRED_BB (elist, x) == ENTRY_BLOCK_PTR)
+      if (INDEX_EDGE_PRED_BB (elist, x) == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	fprintf (f, "entry,");
       else
 	fprintf (f, "%d,", INDEX_EDGE_PRED_BB (elist, x)->index);
 
-      if (INDEX_EDGE_SUCC_BB (elist, x) == EXIT_BLOCK_PTR)
+      if (INDEX_EDGE_SUCC_BB (elist, x) == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	fprintf (f, "exit)\n");
       else
 	fprintf (f, "%d)\n", INDEX_EDGE_SUCC_BB (elist, x)->index);
@@ -285,7 +290,8 @@ verify_edge_list (FILE *f, struct edge_list *elist)
   basic_block bb, p, s;
   edge_iterator ei;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
 	{
@@ -310,8 +316,9 @@ verify_edge_list (FILE *f, struct edge_list *elist)
   /* We've verified that all the edges are in the list, now lets make sure
      there are no spurious edges in the list.  This is an expensive check!  */
 
-  FOR_BB_BETWEEN (p, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
-    FOR_BB_BETWEEN (s, ENTRY_BLOCK_PTR->next_bb, NULL, next_bb)
+  FOR_BB_BETWEEN (p, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
+    FOR_BB_BETWEEN (s, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb, NULL, next_bb)
       {
 	int found_edge = 0;
 
@@ -348,9 +355,9 @@ void
 control_dependences::set_control_dependence_map_bit (basic_block bb,
 						     int edge_index)
 {
-  if (bb == ENTRY_BLOCK_PTR)
+  if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return;
-  gcc_assert (bb != EXIT_BLOCK_PTR);
+  gcc_assert (bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
   bitmap_set_bit (control_dependence_map[bb->index], edge_index);
 }
 
@@ -367,15 +374,15 @@ control_dependences::clear_control_dependence_bitmap (basic_block bb)
 static inline basic_block
 find_pdom (basic_block block)
 {
-  gcc_assert (block != ENTRY_BLOCK_PTR);
+  gcc_assert (block != ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
-  if (block == EXIT_BLOCK_PTR)
-    return EXIT_BLOCK_PTR;
+  if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
+    return EXIT_BLOCK_PTR_FOR_FN (cfun);
   else
     {
       basic_block bb = get_immediate_dominator (CDI_POST_DOMINATORS, block);
       if (! bb)
-	return EXIT_BLOCK_PTR;
+	return EXIT_BLOCK_PTR_FOR_FN (cfun);
       return bb;
     }
 }
@@ -389,15 +396,17 @@ control_dependences::find_control_dependence (int edge_index)
   basic_block current_block;
   basic_block ending_block;
 
-  gcc_assert (INDEX_EDGE_PRED_BB (m_el, edge_index) != EXIT_BLOCK_PTR);
+  gcc_assert (INDEX_EDGE_PRED_BB (m_el, edge_index)
+	      != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
-  if (INDEX_EDGE_PRED_BB (m_el, edge_index) == ENTRY_BLOCK_PTR)
-    ending_block = single_succ (ENTRY_BLOCK_PTR);
+  if (INDEX_EDGE_PRED_BB (m_el, edge_index) == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+    ending_block = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   else
     ending_block = find_pdom (INDEX_EDGE_PRED_BB (m_el, edge_index));
 
   for (current_block = INDEX_EDGE_SUCC_BB (m_el, edge_index);
-       current_block != ending_block && current_block != EXIT_BLOCK_PTR;
+       current_block != ending_block
+       && current_block != EXIT_BLOCK_PTR_FOR_FN (cfun);
        current_block = find_pdom (current_block))
     {
       edge e = INDEX_EDGE (m_el, edge_index);
@@ -523,7 +532,7 @@ remove_fake_edges (void)
 {
   basic_block bb;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb, NULL, next_bb)
     remove_fake_predecessors (bb);
 }
 
@@ -532,7 +541,7 @@ remove_fake_edges (void)
 void
 remove_fake_exit_edges (void)
 {
-  remove_fake_predecessors (EXIT_BLOCK_PTR);
+  remove_fake_predecessors (EXIT_BLOCK_PTR_FOR_FN (cfun));
 }
 
 
@@ -547,7 +556,7 @@ add_noreturn_fake_exit_edges (void)
 
   FOR_EACH_BB (bb)
     if (EDGE_COUNT (bb->succs) == 0)
-      make_single_succ_edge (bb, EXIT_BLOCK_PTR, EDGE_FAKE);
+      make_single_succ_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
 }
 
 /* This function adds a fake edge between any infinite loops to the
@@ -564,14 +573,14 @@ add_noreturn_fake_exit_edges (void)
 void
 connect_infinite_loops_to_exit (void)
 {
-  basic_block unvisited_block = EXIT_BLOCK_PTR;
+  basic_block unvisited_block = EXIT_BLOCK_PTR_FOR_FN (cfun);
   basic_block deadend_block;
   struct depth_first_search_dsS dfs_ds;
 
   /* Perform depth-first search in the reverse graph to find nodes
      reachable from the exit block.  */
   flow_dfs_compute_reverse_init (&dfs_ds);
-  flow_dfs_compute_reverse_add_bb (&dfs_ds, EXIT_BLOCK_PTR);
+  flow_dfs_compute_reverse_add_bb (&dfs_ds, EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   /* Repeatedly add fake edges, updating the unreachable nodes.  */
   while (1)
@@ -582,7 +591,7 @@ connect_infinite_loops_to_exit (void)
 	break;
 
       deadend_block = dfs_find_deadend (unvisited_block);
-      make_edge (deadend_block, EXIT_BLOCK_PTR, EDGE_FAKE);
+      make_edge (deadend_block, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
       flow_dfs_compute_reverse_add_bb (&dfs_ds, deadend_block);
     }
 
@@ -619,7 +628,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
   bitmap_clear (visited);
 
   /* Push the first edge on to the stack.  */
-  stack[sp++] = ei_start (ENTRY_BLOCK_PTR->succs);
+  stack[sp++] = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
 
   while (sp)
     {
@@ -633,7 +642,8 @@ post_order_compute (int *post_order, bool include_entry_exit,
       dest = ei_edge (ei)->dest;
 
       /* Check if the edge destination has been visited yet.  */
-      if (dest != EXIT_BLOCK_PTR && ! bitmap_bit_p (visited, dest->index))
+      if (dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
+	  && ! bitmap_bit_p (visited, dest->index))
 	{
 	  /* Mark that we have visited the destination.  */
 	  bitmap_set_bit (visited, dest->index);
@@ -647,7 +657,8 @@ post_order_compute (int *post_order, bool include_entry_exit,
 	}
       else
 	{
-	  if (ei_one_before_end_p (ei) && src != ENTRY_BLOCK_PTR)
+	  if (ei_one_before_end_p (ei)
+	      && src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    post_order[post_order_num++] = src->index;
 
 	  if (!ei_one_before_end_p (ei))
@@ -671,7 +682,8 @@ post_order_compute (int *post_order, bool include_entry_exit,
     {
       basic_block b;
       basic_block next_bb;
-      for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR; b = next_bb)
+      for (b = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb; b
+	   != EXIT_BLOCK_PTR_FOR_FN (cfun); b = next_bb)
 	{
 	  next_bb = b->next_bb;
 
@@ -813,7 +825,8 @@ inverted_post_order_compute (int *post_order)
             }
           else
             {
-              if (bb != EXIT_BLOCK_PTR && ei_one_before_end_p (ei))
+	      if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
+		  && ei_one_before_end_p (ei))
                 post_order[post_order_num++] = bb->index;
 
               if (!ei_one_before_end_p (ei))
@@ -826,7 +839,8 @@ inverted_post_order_compute (int *post_order)
       /* Detect any infinite loop and activate the kludge.
          Note that this doesn't check EXIT_BLOCK itself
          since EXIT_BLOCK is always added after the outer do-while loop.  */
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		      EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
         if (!bitmap_bit_p (visited, bb->index))
           {
             has_unvisited_bb = true;
@@ -859,7 +873,7 @@ inverted_post_order_compute (int *post_order)
         {
           /* No blocks are reachable from EXIT at all.
              Find a dead-end from the ENTRY, and restart the iteration. */
-          basic_block be = dfs_find_deadend (ENTRY_BLOCK_PTR);
+	  basic_block be = dfs_find_deadend (ENTRY_BLOCK_PTR_FOR_FN (cfun));
           gcc_assert (be != NULL);
           bitmap_set_bit (visited, be->index);
           stack[sp++] = ei_start (be->preds);
@@ -923,7 +937,7 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
   bitmap_clear (visited);
 
   /* Push the first edge on to the stack.  */
-  stack[sp++] = ei_start (ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)->succs);
+  stack[sp++] = ei_start (ENTRY_BLOCK_PTR_FOR_FN (fn)->succs);
 
   while (sp)
     {
@@ -937,7 +951,7 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
       dest = ei_edge (ei)->dest;
 
       /* Check if the edge destination has been visited yet.  */
-      if (dest != EXIT_BLOCK_PTR_FOR_FUNCTION (fn)
+      if (dest != EXIT_BLOCK_PTR_FOR_FN (fn)
 	  && ! bitmap_bit_p (visited, dest->index))
 	{
 	  /* Mark that we have visited the destination.  */
@@ -960,7 +974,7 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
       else
 	{
 	  if (ei_one_before_end_p (ei)
-	      && src != ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)
+	      && src != ENTRY_BLOCK_PTR_FOR_FN (fn)
 	      && rev_post_order)
 	    /* There are no more successors for the SRC node
 	       so assign its reverse completion number.  */
@@ -1230,7 +1244,7 @@ compute_dominance_frontiers_1 (bitmap_head *frontiers)
 	    {
 	      basic_block runner = p->src;
 	      basic_block domsb;
-	      if (runner == ENTRY_BLOCK_PTR)
+	      if (runner == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 		continue;
 
 	      domsb = get_immediate_dominator (CDI_DOMINATORS, b);
@@ -1337,7 +1351,7 @@ bitmap_intersection_of_succs (sbitmap dst, sbitmap *src, basic_block b)
   for (e = NULL, ix = 0; ix < EDGE_COUNT (b->succs); ix++)
     {
       e = EDGE_SUCC (b, ix);
-      if (e->dest == EXIT_BLOCK_PTR)
+      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       bitmap_copy (dst, src[e->dest->index]);
@@ -1353,7 +1367,7 @@ bitmap_intersection_of_succs (sbitmap dst, sbitmap *src, basic_block b)
 	SBITMAP_ELT_TYPE *p, *r;
 
 	e = EDGE_SUCC (b, ix);
-	if (e->dest == EXIT_BLOCK_PTR)
+	if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  continue;
 
 	p = src[e->dest->index]->elms;
@@ -1378,7 +1392,7 @@ bitmap_intersection_of_preds (sbitmap dst, sbitmap *src, basic_block b)
   for (e = NULL, ix = 0; ix < EDGE_COUNT (b->preds); ix++)
     {
       e = EDGE_PRED (b, ix);
-      if (e->src == ENTRY_BLOCK_PTR)
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       bitmap_copy (dst, src[e->src->index]);
@@ -1394,7 +1408,7 @@ bitmap_intersection_of_preds (sbitmap dst, sbitmap *src, basic_block b)
 	SBITMAP_ELT_TYPE *p, *r;
 
 	e = EDGE_PRED (b, ix);
-	if (e->src == ENTRY_BLOCK_PTR)
+	if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	  continue;
 
 	p = src[e->src->index]->elms;
@@ -1419,7 +1433,7 @@ bitmap_union_of_succs (sbitmap dst, sbitmap *src, basic_block b)
   for (ix = 0; ix < EDGE_COUNT (b->succs); ix++)
     {
       e = EDGE_SUCC (b, ix);
-      if (e->dest == EXIT_BLOCK_PTR)
+      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       bitmap_copy (dst, src[e->dest->index]);
@@ -1435,7 +1449,7 @@ bitmap_union_of_succs (sbitmap dst, sbitmap *src, basic_block b)
 	SBITMAP_ELT_TYPE *p, *r;
 
 	e = EDGE_SUCC (b, ix);
-	if (e->dest == EXIT_BLOCK_PTR)
+	if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  continue;
 
 	p = src[e->dest->index]->elms;
@@ -1460,7 +1474,7 @@ bitmap_union_of_preds (sbitmap dst, sbitmap *src, basic_block b)
   for (ix = 0; ix < EDGE_COUNT (b->preds); ix++)
     {
       e = EDGE_PRED (b, ix);
-      if (e->src== ENTRY_BLOCK_PTR)
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       bitmap_copy (dst, src[e->src->index]);
@@ -1476,7 +1490,7 @@ bitmap_union_of_preds (sbitmap dst, sbitmap *src, basic_block b)
 	SBITMAP_ELT_TYPE *p, *r;
 
 	e = EDGE_PRED (b, ix);
-	if (e->src == ENTRY_BLOCK_PTR)
+	if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	  continue;
 
 	p = src[e->src->index]->elms;
@@ -1504,7 +1518,7 @@ single_pred_before_succ_order (void)
 
   bitmap_clear (visited);
 
-  MARK_VISITED (ENTRY_BLOCK_PTR);
+  MARK_VISITED (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   FOR_EACH_BB (x)
     {
       if (VISITED_P (x))
diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index a9ed5f1..08534d4 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -213,8 +213,8 @@ make_edges (basic_block min, basic_block max, int update_p)
 
   /* By nature of the way these get numbered, ENTRY_BLOCK_PTR->next_bb block
      is always the entry.  */
-  if (min == ENTRY_BLOCK_PTR->next_bb)
-    make_edge (ENTRY_BLOCK_PTR, min, EDGE_FALLTHRU);
+  if (min == ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb)
+    make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), min, EDGE_FALLTHRU);
 
   FOR_BB_BETWEEN (bb, min, max->next_bb, next_bb)
     {
@@ -233,14 +233,14 @@ make_edges (basic_block min, basic_block max, int update_p)
 	  if (update_p)
 	    {
 	      FOR_EACH_EDGE (e, ei, bb->succs)
-		if (e->dest != EXIT_BLOCK_PTR)
+		if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		  bitmap_set_bit (edge_cache, e->dest->index);
 	    }
 	}
 
       if (LABEL_P (BB_HEAD (bb))
 	  && LABEL_ALT_ENTRY_P (BB_HEAD (bb)))
-	cached_make_edge (NULL, ENTRY_BLOCK_PTR, bb, 0);
+	cached_make_edge (NULL, ENTRY_BLOCK_PTR_FOR_FN (cfun), bb, 0);
 
       /* Examine the last instruction of the block, and discover the
 	 ways we can leave the block.  */
@@ -294,7 +294,7 @@ make_edges (basic_block min, basic_block max, int update_p)
 
 	  /* Returns create an exit out.  */
 	  else if (returnjump_p (insn))
-	    cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR, 0);
+	    cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
 
 	  /* Recognize asm goto and do the right thing.  */
 	  else if ((tmp = extract_asm_operands (PATTERN (insn))) != NULL)
@@ -318,7 +318,7 @@ make_edges (basic_block min, basic_block max, int update_p)
 	 worry about EH edges, since we wouldn't have created the sibling call
 	 in the first place.  */
       if (code == CALL_INSN && SIBLING_CALL_P (insn))
-	cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR,
+	cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR_FOR_FN (cfun),
 			  EDGE_SIBCALL | EDGE_ABNORMAL);
 
       /* If this is a CALL_INSN, then mark it as reaching the active EH
@@ -359,7 +359,7 @@ make_edges (basic_block min, basic_block max, int update_p)
 
       /* Find out if we can drop through to the next block.  */
       insn = NEXT_INSN (insn);
-      e = find_edge (bb, EXIT_BLOCK_PTR);
+      e = find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun));
       if (e && e->flags & EDGE_FALLTHRU)
 	insn = NULL;
 
@@ -369,8 +369,9 @@ make_edges (basic_block min, basic_block max, int update_p)
 	insn = NEXT_INSN (insn);
 
       if (!insn)
-	cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR, EDGE_FALLTHRU);
-      else if (bb->next_bb != EXIT_BLOCK_PTR)
+	cached_make_edge (edge_cache, bb, EXIT_BLOCK_PTR_FOR_FN (cfun),
+			  EDGE_FALLTHRU);
+      else if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  if (insn == BB_HEAD (bb->next_bb))
 	    cached_make_edge (edge_cache, bb, bb->next_bb, EDGE_FALLTHRU);
@@ -480,7 +481,7 @@ find_bb_boundaries (basic_block bb)
 	  remove_edge (fallthru);
 	  flow_transfer_insn = NULL_RTX;
 	  if (code == CODE_LABEL && LABEL_ALT_ENTRY_P (insn))
-	    make_edge (ENTRY_BLOCK_PTR, bb, 0);
+	    make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), bb, 0);
 	}
       else if (code == BARRIER)
 	{
@@ -607,7 +608,7 @@ find_many_sub_basic_blocks (sbitmap blocks)
       break;
 
   min = max = bb;
-  for (; bb != EXIT_BLOCK_PTR; bb = bb->next_bb)
+  for (; bb != EXIT_BLOCK_PTR_FOR_FN (cfun); bb = bb->next_bb)
     if (STATE (bb) != BLOCK_ORIGINAL)
       max = bb;
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index a2192cb..9c12610 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -134,7 +134,7 @@ try_simplify_condjump (basic_block cbranch_block)
      unconditional jump.  */
   jump_block = cbranch_fallthru_edge->dest;
   if (!single_pred_p (jump_block)
-      || jump_block->next_bb == EXIT_BLOCK_PTR
+      || jump_block->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
       || !FORWARDER_BLOCK_P (jump_block))
     return false;
   jump_dest_block = single_succ (jump_block);
@@ -157,7 +157,7 @@ try_simplify_condjump (basic_block cbranch_block)
      unconditional branch.  */
   cbranch_dest_block = cbranch_jump_edge->dest;
 
-  if (cbranch_dest_block == EXIT_BLOCK_PTR
+  if (cbranch_dest_block == EXIT_BLOCK_PTR_FOR_FN (cfun)
       || !can_fallthru (jump_block, cbranch_dest_block))
     return false;
 
@@ -455,7 +455,7 @@ try_forward_edges (int mode, basic_block b)
 	 bb-reorder.c:partition_hot_cold_basic_blocks for complete
 	 details.  */
 
-      if (first != EXIT_BLOCK_PTR
+      if (first != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && find_reg_note (BB_END (first), REG_CROSSING_JUMP, NULL_RTX))
 	return changed;
 
@@ -467,7 +467,7 @@ try_forward_edges (int mode, basic_block b)
 
 	  if (FORWARDER_BLOCK_P (target)
 	      && !(single_succ_edge (target)->flags & EDGE_CROSSING)
-	      && single_succ (target) != EXIT_BLOCK_PTR)
+	      && single_succ (target) != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      /* Bypass trivial infinite loops.  */
 	      new_target = single_succ (target);
@@ -580,7 +580,7 @@ try_forward_edges (int mode, basic_block b)
 	  e->goto_locus = goto_locus;
 
 	  /* Don't force if target is exit block.  */
-	  if (threaded && target != EXIT_BLOCK_PTR)
+	  if (threaded && target != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      notice_new_block (redirect_edge_and_branch_force (e, target));
 	      if (dump_file)
@@ -793,7 +793,7 @@ merge_blocks_move (edge e, basic_block b, basic_block c, int mode)
 	fprintf (dump_file, "Merged %d and %d without moving.\n",
 		 b_index, c_index);
 
-      return b->prev_bb == ENTRY_BLOCK_PTR ? b : b->prev_bb;
+      return b->prev_bb == ENTRY_BLOCK_PTR_FOR_FN (cfun) ? b : b->prev_bb;
     }
 
   /* Otherwise we will need to move code around.  Do that only if expensive
@@ -831,7 +831,7 @@ merge_blocks_move (edge e, basic_block b, basic_block c, int mode)
       if (! c_has_outgoing_fallthru)
 	{
 	  merge_blocks_move_successor_nojumps (b, c);
-	  return next == ENTRY_BLOCK_PTR ? next->next_bb : next;
+	  return next == ENTRY_BLOCK_PTR_FOR_FN (cfun) ? next->next_bb : next;
 	}
 
       /* If B does not have an incoming fallthru, then it can be moved
@@ -843,7 +843,7 @@ merge_blocks_move (edge e, basic_block b, basic_block c, int mode)
 	{
 	  basic_block bb;
 
-	  if (b_fallthru_edge->src == ENTRY_BLOCK_PTR)
+	  if (b_fallthru_edge->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    return NULL;
 	  bb = force_nonfallthru (b_fallthru_edge);
 	  if (bb)
@@ -851,7 +851,7 @@ merge_blocks_move (edge e, basic_block b, basic_block c, int mode)
 	}
 
       merge_blocks_move_predecessor_nojumps (b, c);
-      return next == ENTRY_BLOCK_PTR ? next->next_bb : next;
+      return next == ENTRY_BLOCK_PTR_FOR_FN (cfun) ? next->next_bb : next;
     }
 
   return NULL;
@@ -1267,7 +1267,7 @@ walk_to_nondebug_insn (rtx *i1, basic_block *bb1, bool follow_fallthru,
         return;
 
       fallthru = find_fallthru_edge ((*bb1)->preds);
-      if (!fallthru || fallthru->src == ENTRY_BLOCK_PTR_FOR_FUNCTION (cfun)
+      if (!fallthru || fallthru->src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
           || !single_succ_p (fallthru->src))
         return;
 
@@ -1540,7 +1540,8 @@ outgoing_edges_match (int mode, basic_block bb1, basic_block bb2)
      whether they went through the prologue.  Sibcalls are fine, we know
      that we either didn't need or inserted an epilogue before them.  */
   if (crtl->shrink_wrapped
-      && single_succ_p (bb1) && single_succ (bb1) == EXIT_BLOCK_PTR
+      && single_succ_p (bb1)
+      && single_succ (bb1) == EXIT_BLOCK_PTR_FOR_FN (cfun)
       && !JUMP_P (BB_END (bb1))
       && !(CALL_P (BB_END (bb1)) && SIBLING_CALL_P (BB_END (bb1))))
     return false;
@@ -1902,7 +1903,8 @@ try_crossjump_to_edge (int mode, edge e1, edge e2,
     e2 = single_pred_edge (src2), src2 = e2->src;
 
   /* Nothing to do if we reach ENTRY, or a common source block.  */
-  if (src1 == ENTRY_BLOCK_PTR || src2 == ENTRY_BLOCK_PTR)
+  if (src1 == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+      || src2 == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return false;
   if (src1 == src2)
     return false;
@@ -2146,7 +2148,7 @@ try_crossjump_bb (int mode, basic_block bb)
   /* Don't crossjump if this block ends in a computed jump,
      unless we are optimizing for size.  */
   if (optimize_bb_for_size_p (bb)
-      && bb != EXIT_BLOCK_PTR
+      && bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
       && computed_jump_p (BB_END (bb)))
     return false;
 
@@ -2287,7 +2289,7 @@ try_head_merge_bb (basic_block bb)
   /* Don't crossjump if this block ends in a computed jump,
      unless we are optimizing for size.  */
   if (optimize_bb_for_size_p (bb)
-      && bb != EXIT_BLOCK_PTR
+      && bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
       && computed_jump_p (BB_END (bb)))
     return false;
 
@@ -2303,7 +2305,7 @@ try_head_merge_bb (basic_block bb)
     }
 
   for (ix = 0; ix < nedges; ix++)
-    if (EDGE_SUCC (bb, ix)->dest == EXIT_BLOCK_PTR)
+    if (EDGE_SUCC (bb, ix)->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
       return false;
 
   for (ix = 0; ix < nedges; ix++)
@@ -2623,7 +2625,8 @@ try_optimize_cfg (int mode)
 		     "\n\ntry_optimize_cfg iteration %i\n\n",
 		     iterations);
 
-	  for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR;)
+	  for (b = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb;
+	       b != EXIT_BLOCK_PTR_FOR_FN (cfun);)
 	    {
 	      basic_block c;
 	      edge s;
@@ -2640,7 +2643,8 @@ try_optimize_cfg (int mode)
 	      if (EDGE_COUNT (b->preds) == 0
 		  || (EDGE_COUNT (b->succs) == 0
 		      && trivially_empty_bb_p (b)
-		      && single_succ_edge (ENTRY_BLOCK_PTR)->dest != b))
+		      && single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun))->dest
+		      != b))
 		{
 		  c = b->prev_bb;
 		  if (EDGE_COUNT (b->preds) > 0)
@@ -2681,7 +2685,7 @@ try_optimize_cfg (int mode)
 		  delete_basic_block (b);
 		  changed = true;
 		  /* Avoid trying to remove ENTRY_BLOCK_PTR.  */
-		  b = (c == ENTRY_BLOCK_PTR ? c->next_bb : c);
+		  b = (c == ENTRY_BLOCK_PTR_FOR_FN (cfun) ? c->next_bb : c);
 		  continue;
 		}
 
@@ -2696,7 +2700,7 @@ try_optimize_cfg (int mode)
 		     if CASE_DROPS_THRU, this can be a tablejump with
 		     some element going to the same place as the
 		     default (fallthru).  */
-		  && (single_pred (b) == ENTRY_BLOCK_PTR
+		  && (single_pred (b) == ENTRY_BLOCK_PTR_FOR_FN (cfun)
 		      || !JUMP_P (BB_END (single_pred (b)))
 		      || ! label_is_jump_target_p (BB_HEAD (b),
 						   BB_END (single_pred (b)))))
@@ -2723,7 +2727,8 @@ try_optimize_cfg (int mode)
 			     "Deleting fallthru block %i.\n",
 			     b->index);
 
-		  c = b->prev_bb == ENTRY_BLOCK_PTR ? b->next_bb : b->prev_bb;
+		  c = ((b->prev_bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+		       ? b->next_bb : b->prev_bb);
 		  redirect_edge_succ_nodup (single_pred_edge (b),
 					    single_succ (b));
 		  delete_basic_block (b);
@@ -2736,7 +2741,7 @@ try_optimize_cfg (int mode)
 	      if (single_succ_p (b)
 		  && (s = single_succ_edge (b))
 		  && !(s->flags & EDGE_COMPLEX)
-		  && (c = s->dest) != EXIT_BLOCK_PTR
+		  && (c = s->dest) != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  && single_pred_p (c)
 		  && b != c)
 		{
@@ -2780,7 +2785,7 @@ try_optimize_cfg (int mode)
 		 can either delete the jump entirely, or replace it
 		 with a simple unconditional jump.  */
 	      if (single_succ_p (b)
-		  && single_succ (b) != EXIT_BLOCK_PTR
+		  && single_succ (b) != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  && onlyjump_p (BB_END (b))
 		  && !find_reg_note (BB_END (b), REG_CROSSING_JUMP, NULL_RTX)
 		  && try_redirect_by_replacing_jump (single_succ_edge (b),
@@ -2819,7 +2824,7 @@ try_optimize_cfg (int mode)
 	    }
 
 	  if ((mode & CLEANUP_CROSSJUMP)
-	      && try_crossjump_bb (mode, EXIT_BLOCK_PTR))
+	      && try_crossjump_bb (mode, EXIT_BLOCK_PTR_FOR_FN (cfun)))
 	    changed = true;
 
 	  if (block_was_dirty)
@@ -2876,7 +2881,8 @@ delete_unreachable_blocks (void)
   if (MAY_HAVE_DEBUG_INSNS && current_ir_type () == IR_GIMPLE
       && dom_info_available_p (CDI_DOMINATORS))
     {
-      for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
+      for (b = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
+	   b != ENTRY_BLOCK_PTR_FOR_FN (cfun); b = prev_bb)
 	{
 	  prev_bb = b->prev_bb;
 
@@ -2912,7 +2918,8 @@ delete_unreachable_blocks (void)
     }
   else
     {
-      for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
+      for (b = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
+	   b != ENTRY_BLOCK_PTR_FOR_FN (cfun); b = prev_bb)
 	{
 	  prev_bb = b->prev_bb;
 
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 4ff1a89..d431c8d 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -3363,7 +3363,7 @@ expand_gimple_tailcall (basic_block bb, gimple stmt, bool *can_fallthru)
     {
       if (!(e->flags & (EDGE_ABNORMAL | EDGE_EH)))
 	{
-	  if (e->dest != EXIT_BLOCK_PTR)
+	  if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      e->dest->count -= e->count;
 	      e->dest->frequency -= EDGE_FREQUENCY (e);
@@ -3399,7 +3399,8 @@ expand_gimple_tailcall (basic_block bb, gimple stmt, bool *can_fallthru)
       delete_insn (NEXT_INSN (last));
     }
 
-  e = make_edge (bb, EXIT_BLOCK_PTR, EDGE_ABNORMAL | EDGE_SIBCALL);
+  e = make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun),
+		 EDGE_ABNORMAL | EDGE_SIBCALL);
   e->probability += probability;
   e->count += count;
   BB_END (bb) = last;
@@ -4840,9 +4841,9 @@ expand_gimple_basic_block (basic_block bb, bool disable_tail_calls)
       gimple ret_stmt = gsi_stmt (gsi);
 
       gcc_assert (single_succ_p (bb));
-      gcc_assert (single_succ (bb) == EXIT_BLOCK_PTR);
+      gcc_assert (single_succ (bb) == EXIT_BLOCK_PTR_FOR_FN (cfun));
 
-      if (bb->next_bb == EXIT_BLOCK_PTR
+      if (bb->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && !gimple_return_retval (ret_stmt))
 	{
 	  gsi_remove (&gsi, false);
@@ -5184,17 +5185,17 @@ construct_init_block (void)
   int flags;
 
   /* Multiple entry points not supported yet.  */
-  gcc_assert (EDGE_COUNT (ENTRY_BLOCK_PTR->succs) == 1);
-  init_rtl_bb_info (ENTRY_BLOCK_PTR);
-  init_rtl_bb_info (EXIT_BLOCK_PTR);
-  ENTRY_BLOCK_PTR->flags |= BB_RTL;
-  EXIT_BLOCK_PTR->flags |= BB_RTL;
+  gcc_assert (EDGE_COUNT (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs) == 1);
+  init_rtl_bb_info (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  init_rtl_bb_info (EXIT_BLOCK_PTR_FOR_FN (cfun));
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->flags |= BB_RTL;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->flags |= BB_RTL;
 
-  e = EDGE_SUCC (ENTRY_BLOCK_PTR, 0);
+  e = EDGE_SUCC (ENTRY_BLOCK_PTR_FOR_FN (cfun), 0);
 
   /* When entry edge points to first basic block, we don't need jump,
      otherwise we have to jump into proper target.  */
-  if (e && e->dest != ENTRY_BLOCK_PTR->next_bb)
+  if (e && e->dest != ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb)
     {
       tree label = gimple_block_label (e->dest);
 
@@ -5206,11 +5207,11 @@ construct_init_block (void)
 
   init_block = create_basic_block (NEXT_INSN (get_insns ()),
 				   get_last_insn (),
-				   ENTRY_BLOCK_PTR);
-  init_block->frequency = ENTRY_BLOCK_PTR->frequency;
-  init_block->count = ENTRY_BLOCK_PTR->count;
-  if (current_loops && ENTRY_BLOCK_PTR->loop_father)
-    add_bb_to_loop (init_block, ENTRY_BLOCK_PTR->loop_father);
+				   ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  init_block->frequency = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency;
+  init_block->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
+  if (current_loops && ENTRY_BLOCK_PTR_FOR_FN (cfun)->loop_father)
+    add_bb_to_loop (init_block, ENTRY_BLOCK_PTR_FOR_FN (cfun)->loop_father);
   if (e)
     {
       first_block = e->dest;
@@ -5218,9 +5219,9 @@ construct_init_block (void)
       e = make_edge (init_block, first_block, flags);
     }
   else
-    e = make_edge (init_block, EXIT_BLOCK_PTR, EDGE_FALLTHRU);
+    e = make_edge (init_block, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FALLTHRU);
   e->probability = REG_BR_PROB_BASE;
-  e->count = ENTRY_BLOCK_PTR->count;
+  e->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
 
   update_bb_for_insn (init_block);
   return init_block;
@@ -5251,9 +5252,9 @@ construct_exit_block (void)
   edge e, e2;
   unsigned ix;
   edge_iterator ei;
-  rtx orig_end = BB_END (EXIT_BLOCK_PTR->prev_bb);
+  rtx orig_end = BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
 
-  rtl_profile_for_bb (EXIT_BLOCK_PTR);
+  rtl_profile_for_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   /* Make sure the locus is set to the end of the function, so that
      epilogue line numbers and warnings are set properly.  */
@@ -5268,30 +5269,30 @@ construct_exit_block (void)
     return;
   /* While emitting the function end we could move end of the last basic block.
    */
-  BB_END (EXIT_BLOCK_PTR->prev_bb) = orig_end;
+  BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb) = orig_end;
   while (NEXT_INSN (head) && NOTE_P (NEXT_INSN (head)))
     head = NEXT_INSN (head);
   exit_block = create_basic_block (NEXT_INSN (head), end,
-				   EXIT_BLOCK_PTR->prev_bb);
-  exit_block->frequency = EXIT_BLOCK_PTR->frequency;
-  exit_block->count = EXIT_BLOCK_PTR->count;
-  if (current_loops && EXIT_BLOCK_PTR->loop_father)
-    add_bb_to_loop (exit_block, EXIT_BLOCK_PTR->loop_father);
+				   EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
+  exit_block->frequency = EXIT_BLOCK_PTR_FOR_FN (cfun)->frequency;
+  exit_block->count = EXIT_BLOCK_PTR_FOR_FN (cfun)->count;
+  if (current_loops && EXIT_BLOCK_PTR_FOR_FN (cfun)->loop_father)
+    add_bb_to_loop (exit_block, EXIT_BLOCK_PTR_FOR_FN (cfun)->loop_father);
 
   ix = 0;
-  while (ix < EDGE_COUNT (EXIT_BLOCK_PTR->preds))
+  while (ix < EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds))
     {
-      e = EDGE_PRED (EXIT_BLOCK_PTR, ix);
+      e = EDGE_PRED (EXIT_BLOCK_PTR_FOR_FN (cfun), ix);
       if (!(e->flags & EDGE_ABNORMAL))
 	redirect_edge_succ (e, exit_block);
       else
 	ix++;
     }
 
-  e = make_edge (exit_block, EXIT_BLOCK_PTR, EDGE_FALLTHRU);
+  e = make_edge (exit_block, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FALLTHRU);
   e->probability = REG_BR_PROB_BASE;
-  e->count = EXIT_BLOCK_PTR->count;
-  FOR_EACH_EDGE (e2, ei, EXIT_BLOCK_PTR->preds)
+  e->count = EXIT_BLOCK_PTR_FOR_FN (cfun)->count;
+  FOR_EACH_EDGE (e2, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (e2 != e)
       {
 	e->count -= e2->count;
@@ -5521,7 +5522,7 @@ gimple_expand_cfg (void)
   /* Dominators are not kept up-to-date as we may create new basic-blocks.  */
   free_dominance_info (CDI_DOMINATORS);
 
-  rtl_profile_for_bb (ENTRY_BLOCK_PTR);
+  rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   insn_locations_init ();
   if (!DECL_IS_BUILTIN (current_function_decl))
@@ -5685,11 +5686,12 @@ gimple_expand_cfg (void)
 
   /* Clear EDGE_EXECUTABLE on the entry edge(s).  It is cleaned from the
      remaining edges later.  */
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     e->flags &= ~EDGE_EXECUTABLE;
 
   lab_rtx_for_bb = pointer_map_create ();
-  FOR_BB_BETWEEN (bb, init_block->next_bb, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, init_block->next_bb, EXIT_BLOCK_PTR_FOR_FN (cfun),
+		  next_bb)
     bb = expand_gimple_basic_block (bb, var_ret_seq != NULL_RTX);
 
   if (MAY_HAVE_DEBUG_INSNS)
@@ -5734,7 +5736,8 @@ gimple_expand_cfg (void)
      split edges which edge insertions might do.  */
   rebuild_jump_labels (get_insns ());
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -5745,8 +5748,8 @@ gimple_expand_cfg (void)
 	      rebuild_jump_labels_chain (e->insns.r);
 	      /* Put insns after parm birth, but before
 		 NOTE_INSNS_FUNCTION_BEG.  */
-	      if (e->src == ENTRY_BLOCK_PTR
-		  && single_succ_p (ENTRY_BLOCK_PTR))
+	      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+		  && single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (cfun)))
 		{
 		  rtx insns = e->insns.r;
 		  e->insns.r = NULL_RTX;
@@ -5767,7 +5770,8 @@ gimple_expand_cfg (void)
   /* We're done expanding trees to RTL.  */
   currently_expanding_to_rtl = 0;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb,
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       edge e;
       edge_iterator ei;
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 20b90bf..2535c90 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -102,10 +102,10 @@ verify_flow_info (void)
   edge_checksum = XCNEWVEC (size_t, last_basic_block);
 
   /* Check bb chain & numbers.  */
-  last_bb_seen = ENTRY_BLOCK_PTR;
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb, NULL, next_bb)
+  last_bb_seen = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb, NULL, next_bb)
     {
-      if (bb != EXIT_BLOCK_PTR
+      if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && bb != BASIC_BLOCK (bb->index))
 	{
 	  error ("bb %d on wrong place", bb->index);
@@ -234,21 +234,21 @@ verify_flow_info (void)
     edge e;
     edge_iterator ei;
 
-    FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+    FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
       edge_checksum[e->dest->index] += (size_t) e;
 
-    FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+    FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
       edge_checksum[e->dest->index] -= (size_t) e;
   }
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     if (edge_checksum[bb->index])
       {
 	error ("basic block %i edge lists are corrupted", bb->index);
 	err = 1;
       }
 
-  last_bb_seen = ENTRY_BLOCK_PTR;
+  last_bb_seen = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   /* Clean up.  */
   free (last_visited);
@@ -938,10 +938,11 @@ tidy_fallthru_edges (void)
   if (!cfg_hooks->tidy_fallthru_edge)
     return;
 
-  if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR)
+  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return;
 
-  FOR_BB_BETWEEN (b, ENTRY_BLOCK_PTR->next_bb, EXIT_BLOCK_PTR->prev_bb, next_bb)
+  FOR_BB_BETWEEN (b, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb,
+		  EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb, next_bb)
     {
       edge s;
 
@@ -1011,7 +1012,7 @@ can_duplicate_block_p (const_basic_block bb)
     internal_error ("%s does not support can_duplicate_block_p",
 		    cfg_hooks->name);
 
-  if (bb == EXIT_BLOCK_PTR || bb == ENTRY_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
   return cfg_hooks->can_duplicate_block_p (bb);
@@ -1409,7 +1410,7 @@ account_profile_record (struct profile_record *record, int after_pass)
 
   FOR_ALL_BB (bb)
    {
-      if (bb != EXIT_BLOCK_PTR_FOR_FUNCTION (cfun)
+      if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && profile_status != PROFILE_ABSENT)
 	{
 	  sum = 0;
@@ -1424,7 +1425,7 @@ account_profile_record (struct profile_record *record, int after_pass)
 	      && (lsum - bb->count > 100 || lsum - bb->count < -100))
 	    record->num_mismatched_count_out[after_pass]++;
 	}
-      if (bb != ENTRY_BLOCK_PTR_FOR_FUNCTION (cfun)
+      if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	  && profile_status != PROFILE_ABSENT)
 	{
 	  sum = 0;
@@ -1440,8 +1441,8 @@ account_profile_record (struct profile_record *record, int after_pass)
 	  if (lsum - bb->count > 100 || lsum - bb->count < -100)
 	    record->num_mismatched_count_in[after_pass]++;
 	}
-      if (bb == ENTRY_BLOCK_PTR_FOR_FUNCTION (cfun)
-	  || bb == EXIT_BLOCK_PTR_FOR_FUNCTION (cfun))
+      if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	  || bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
       gcc_assert (cfg_hooks->account_profile_record);
       cfg_hooks->account_profile_record (bb, after_pass, record);
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index a5eb4da..4b3ad5b 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -352,10 +352,10 @@ init_loops_structure (struct function *fn,
   /* Dummy loop containing whole function.  */
   root = alloc_loop ();
   root->num_nodes = n_basic_blocks_for_fn (fn);
-  root->latch = EXIT_BLOCK_PTR_FOR_FUNCTION (fn);
-  root->header = ENTRY_BLOCK_PTR_FOR_FUNCTION (fn);
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)->loop_father = root;
-  EXIT_BLOCK_PTR_FOR_FUNCTION (fn)->loop_father = root;
+  root->latch = EXIT_BLOCK_PTR_FOR_FN (fn);
+  root->header = ENTRY_BLOCK_PTR_FOR_FN (fn);
+  ENTRY_BLOCK_PTR_FOR_FN (fn)->loop_father = root;
+  EXIT_BLOCK_PTR_FOR_FN (fn)->loop_father = root;
 
   loops->larray->quick_push (root);
   loops->tree_root = root;
@@ -382,7 +382,7 @@ bb_loop_header_p (basic_block header)
   FOR_EACH_EDGE (e, ei, header->preds)
     {
       basic_block latch = e->src;
-      if (latch != ENTRY_BLOCK_PTR
+      if (latch != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	  && dominated_by_p (CDI_DOMINATORS, latch, header))
 	return true;
     }
@@ -745,7 +745,7 @@ disambiguate_multiple_latches (struct loop *loop)
      block.  This would cause problems if the entry edge was the one from the
      entry block.  To avoid having to handle this case specially, split
      such entry edge.  */
-  e = find_edge (ENTRY_BLOCK_PTR, loop->header);
+  e = find_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), loop->header);
   if (e)
     split_edge (e);
 
@@ -781,7 +781,8 @@ flow_bb_inside_loop_p (const struct loop *loop, const_basic_block bb)
 {
   struct loop *source_loop;
 
-  if (bb == ENTRY_BLOCK_PTR || bb == EXIT_BLOCK_PTR)
+  if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+      || bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return 0;
 
   source_loop = bb->loop_father;
@@ -826,13 +827,13 @@ get_loop_body (const struct loop *loop)
 
   body = XNEWVEC (basic_block, loop->num_nodes);
 
-  if (loop->latch == EXIT_BLOCK_PTR)
+  if (loop->latch == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       /* There may be blocks unreachable from EXIT_BLOCK, hence we need to
 	 special-case the fake loop that contains the whole function.  */
       gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks_for_fn (cfun));
       body[tv++] = loop->header;
-      body[tv++] = EXIT_BLOCK_PTR;
+      body[tv++] = EXIT_BLOCK_PTR_FOR_FN (cfun);
       FOR_EACH_BB (bb)
 	body[tv++] = bb;
     }
@@ -886,7 +887,7 @@ get_loop_body_in_dom_order (const struct loop *loop)
 
   tovisit = XNEWVEC (basic_block, loop->num_nodes);
 
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR);
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   tv = 0;
   fill_sons_in_loop (loop, loop->header, tovisit, &tv);
@@ -921,7 +922,7 @@ get_loop_body_in_bfs_order (const struct loop *loop)
   unsigned int vc = 1;
 
   gcc_assert (loop->num_nodes);
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR);
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   blocks = XNEWVEC (basic_block, loop->num_nodes);
   visited = BITMAP_ALLOC (NULL);
@@ -1143,7 +1144,7 @@ get_loop_exit_edges (const struct loop *loop)
   edge_iterator ei;
   struct loop_exit *exit;
 
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR);
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   /* If we maintain the lists of exits, use them.  Otherwise we must
      scan the body of the loop.  */
@@ -1175,7 +1176,7 @@ num_loop_branches (const struct loop *loop)
   unsigned i, n;
   basic_block * body;
 
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR);
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   body = get_loop_body (loop);
   n = 0;
diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c
index 9300237..0cee6c6 100644
--- a/gcc/cfgloopanal.c
+++ b/gcc/cfgloopanal.c
@@ -85,7 +85,8 @@ mark_irreducible_loops (void)
   gcc_assert (current_loops != NULL);
 
   /* Reset the flags.  */
-  FOR_BB_BETWEEN (act, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (act, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       act->flags &= ~BB_IRREDUCIBLE_LOOP;
       FOR_EACH_EDGE (e, ei, act->succs)
@@ -95,11 +96,12 @@ mark_irreducible_loops (void)
   /* Create the edge lists.  */
   g = new_graph (last_basic_block + num);
 
-  FOR_BB_BETWEEN (act, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (act, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     FOR_EACH_EDGE (e, ei, act->succs)
       {
 	/* Ignore edges to exit.  */
-	if (e->dest == EXIT_BLOCK_PTR)
+	if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  continue;
 
 	src = BB_REPR (act);
diff --git a/gcc/cfgloopmanip.c b/gcc/cfgloopmanip.c
index 714c7e1..6baa15a 100644
--- a/gcc/cfgloopmanip.c
+++ b/gcc/cfgloopmanip.c
@@ -92,7 +92,7 @@ fix_bb_placement (basic_block bb)
 
   FOR_EACH_EDGE (e, ei, bb->succs)
     {
-      if (e->dest == EXIT_BLOCK_PTR)
+      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       act = e->dest->loop_father;
@@ -352,7 +352,8 @@ remove_path (edge e)
     bitmap_set_bit (seen, rem_bbs[i]->index);
   if (!irred_invalidated)
     FOR_EACH_EDGE (ae, ei, e->src->succs)
-      if (ae != e && ae->dest != EXIT_BLOCK_PTR && !bitmap_bit_p (seen, ae->dest->index)
+      if (ae != e && ae->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
+	  && !bitmap_bit_p (seen, ae->dest->index)
 	  && ae->flags & EDGE_IRREDUCIBLE_LOOP)
 	{
 	  irred_invalidated = true;
@@ -363,7 +364,8 @@ remove_path (edge e)
     {
       bb = rem_bbs[i];
       FOR_EACH_EDGE (ae, ei, rem_bbs[i]->succs)
-	if (ae->dest != EXIT_BLOCK_PTR && !bitmap_bit_p (seen, ae->dest->index))
+	if (ae->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
+	    && !bitmap_bit_p (seen, ae->dest->index))
 	  {
 	    bitmap_set_bit (seen, ae->dest->index);
 	    bord_bbs[n_bord_bbs++] = ae->dest;
@@ -1519,7 +1521,7 @@ create_preheader (struct loop *loop, int flags)
 
       /* We do not allow entry block to be the loop preheader, since we
 	     cannot emit code there.  */
-      if (single_entry->src == ENTRY_BLOCK_PTR)
+      if (single_entry->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
         need_forwarder_block = true;
       else
         {
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index c81d3a5..7ad3872 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -501,7 +501,7 @@ rtx
 entry_of_function (void)
 {
   return (n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS ?
-	  BB_HEAD (ENTRY_BLOCK_PTR->next_bb) : get_insns ());
+	  BB_HEAD (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb) : get_insns ());
 }
 
 /* Emit INSN at the entry point of the function, ensuring that it is only
@@ -509,7 +509,7 @@ entry_of_function (void)
 void
 emit_insn_at_entry (rtx insn)
 {
-  edge_iterator ei = ei_start (ENTRY_BLOCK_PTR->succs);
+  edge_iterator ei = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
   edge e = ei_safe_edge (ei);
   gcc_assert (e->flags & EDGE_FALLTHRU);
 
@@ -573,7 +573,7 @@ contains_no_active_insn_p (const_basic_block bb)
 {
   rtx insn;
 
-  if (bb == EXIT_BLOCK_PTR || bb == ENTRY_BLOCK_PTR
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)
       || !single_succ_p (bb))
     return false;
 
@@ -620,7 +620,7 @@ can_fallthru (basic_block src, basic_block target)
   edge e;
   edge_iterator ei;
 
-  if (target == EXIT_BLOCK_PTR)
+  if (target == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return true;
   if (src->next_bb != target)
     return false;
@@ -630,7 +630,7 @@ can_fallthru (basic_block src, basic_block target)
     return false;
 
   FOR_EACH_EDGE (e, ei, src->succs)
-    if (e->dest == EXIT_BLOCK_PTR
+    if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	&& e->flags & EDGE_FALLTHRU)
       return false;
 
@@ -650,10 +650,10 @@ could_fall_through (basic_block src, basic_block target)
   edge e;
   edge_iterator ei;
 
-  if (target == EXIT_BLOCK_PTR)
+  if (target == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return true;
   FOR_EACH_EDGE (e, ei, src->succs)
-    if (e->dest == EXIT_BLOCK_PTR
+    if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	&& e->flags & EDGE_FALLTHRU)
       return 0;
   return true;
@@ -958,7 +958,8 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
 	  /* Must be simple edge.  */
 	  && !(single_succ_edge (a)->flags & EDGE_COMPLEX)
 	  && a->next_bb == b
-	  && a != ENTRY_BLOCK_PTR && b != EXIT_BLOCK_PTR
+	  && a != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	  && b != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  /* If the jump insn has side effects,
 	     we can't kill the edge.  */
 	  && (!JUMP_P (BB_END (a))
@@ -972,7 +973,7 @@ rtl_can_merge_blocks (basic_block a, basic_block b)
 rtx
 block_label (basic_block block)
 {
-  if (block == EXIT_BLOCK_PTR)
+  if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return NULL_RTX;
 
   if (!LABEL_P (BB_HEAD (block)))
@@ -1084,13 +1085,13 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
 		 INSN_UID (insn), e->dest->index, target->index);
       if (!redirect_jump (insn, block_label (target), 0))
 	{
-	  gcc_assert (target == EXIT_BLOCK_PTR);
+	  gcc_assert (target == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  return NULL;
 	}
     }
 
   /* Cannot do anything for target exit block.  */
-  else if (target == EXIT_BLOCK_PTR)
+  else if (target == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return NULL;
 
   /* Or replace possibly complicated jump insn by simple jump insn.  */
@@ -1178,7 +1179,7 @@ patch_jump_insn (rtx insn, rtx old_label, basic_block new_bb)
       int j;
       rtx new_label = block_label (new_bb);
 
-      if (new_bb == EXIT_BLOCK_PTR)
+      if (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	return false;
       if (GET_CODE (PATTERN (tmp)) == ADDR_VEC)
 	vec = XVEC (PATTERN (tmp), 0);
@@ -1211,7 +1212,7 @@ patch_jump_insn (rtx insn, rtx old_label, basic_block new_bb)
       int i, n = ASM_OPERANDS_LABEL_LENGTH (tmp);
       rtx new_label, note;
 
-      if (new_bb == EXIT_BLOCK_PTR)
+      if (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	return false;
       new_label = block_label (new_bb);
 
@@ -1268,7 +1269,7 @@ patch_jump_insn (rtx insn, rtx old_label, basic_block new_bb)
 	     target is exit block on some arches.  */
 	  if (!redirect_jump (insn, block_label (new_bb), 0))
 	    {
-	      gcc_assert (new_bb == EXIT_BLOCK_PTR);
+	      gcc_assert (new_bb == EXIT_BLOCK_PTR_FOR_FN (cfun));
 	      return false;
 	    }
 	}
@@ -1324,7 +1325,8 @@ fixup_partition_crossing (edge e)
 {
   rtx note;
 
-  if (e->src == ENTRY_BLOCK_PTR || e->dest == EXIT_BLOCK_PTR)
+  if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun) || e->dest
+      == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return;
   /* If we redirected an existing edge, it may already be marked
      crossing, even though the new src is missing a reg crossing note.
@@ -1392,7 +1394,7 @@ fixup_new_cold_bb (basic_block bb)
          boundary fixup by calling fixup_partition_crossing itself.  */
       if ((e->flags & EDGE_FALLTHRU)
           && BB_PARTITION (bb) != BB_PARTITION (e->dest)
-          && e->dest != EXIT_BLOCK_PTR)
+	  && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
         force_nonfallthru (e);
       else
         fixup_partition_crossing (e);
@@ -1470,7 +1472,8 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
   /* In the case the last instruction is conditional jump to the next
      instruction, first redirect the jump itself and then continue
      by creating a basic block afterwards to redirect fallthru edge.  */
-  if (e->src != ENTRY_BLOCK_PTR && e->dest != EXIT_BLOCK_PTR
+  if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+      && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
       && any_condjump_p (BB_END (e->src))
       && JUMP_LABEL (BB_END (e->src)) == BB_HEAD (e->dest))
     {
@@ -1512,7 +1515,7 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
   else
     {
       gcc_assert (e->flags & EDGE_FALLTHRU);
-      if (e->src == ENTRY_BLOCK_PTR)
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  /* We can't redirect the entry block.  Create an empty block
 	     at the start of the function which we use to add the new
@@ -1521,16 +1524,18 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
 	  edge_iterator ei;
 	  bool found = false;
 
-	  basic_block bb = create_basic_block (BB_HEAD (e->dest), NULL, ENTRY_BLOCK_PTR);
+	  basic_block bb = create_basic_block (BB_HEAD (e->dest), NULL,
+					       ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
 	  /* Change the existing edge's source to be the new block, and add
 	     a new edge from the entry block to the new block.  */
 	  e->src = bb;
-	  for (ei = ei_start (ENTRY_BLOCK_PTR->succs); (tmp = ei_safe_edge (ei)); )
+	  for (ei = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
+	       (tmp = ei_safe_edge (ei)); )
 	    {
 	      if (tmp == e)
 		{
-		  ENTRY_BLOCK_PTR->succs->unordered_remove (ei.index);
+		  ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs->unordered_remove (ei.index);
 		  found = true;
 		  break;
 		}
@@ -1541,14 +1546,15 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
 	  gcc_assert (found);
 
 	  vec_safe_push (bb->succs, e);
-	  make_single_succ_edge (ENTRY_BLOCK_PTR, bb, EDGE_FALLTHRU);
+	  make_single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), bb,
+				 EDGE_FALLTHRU);
 	}
     }
 
   /* If e->src ends with asm goto, see if any of the ASM_OPERANDS_LABELs
      don't point to the target or fallthru label.  */
   if (JUMP_P (BB_END (e->src))
-      && target != EXIT_BLOCK_PTR
+      && target != EXIT_BLOCK_PTR_FOR_FN (cfun)
       && (e->flags & EDGE_FALLTHRU)
       && (note = extract_asm_operands (PATTERN (BB_END (e->src)))))
     {
@@ -1650,7 +1656,7 @@ force_nonfallthru_and_redirect (edge e, basic_block target, rtx jump_label)
 
   loc = e->goto_locus;
   e->flags &= ~EDGE_FALLTHRU;
-  if (target == EXIT_BLOCK_PTR)
+  if (target == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       if (jump_label == ret_rtx)
 	{
@@ -1784,7 +1790,7 @@ static basic_block
 last_bb_in_partition (basic_block start_bb)
 {
   basic_block bb;
-  FOR_BB_BETWEEN (bb, start_bb, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, start_bb, EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       if (BB_PARTITION (start_bb) != BB_PARTITION (bb->next_bb))
         return bb;
@@ -1820,14 +1826,15 @@ rtl_split_edge (edge edge_in)
     }
 
   /* Create the basic block note.  */
-  if (edge_in->dest != EXIT_BLOCK_PTR)
+  if (edge_in->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
     before = BB_HEAD (edge_in->dest);
   else
     before = NULL_RTX;
 
   /* If this is a fall through edge to the exit block, the blocks might be
      not adjacent, and the right place is after the source.  */
-  if ((edge_in->flags & EDGE_FALLTHRU) && edge_in->dest == EXIT_BLOCK_PTR)
+  if ((edge_in->flags & EDGE_FALLTHRU)
+      && edge_in->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       before = NEXT_INSN (BB_END (edge_in->src));
       bb = create_basic_block (before, NULL, edge_in->src);
@@ -1835,7 +1842,7 @@ rtl_split_edge (edge edge_in)
     }
   else
     {
-      if (edge_in->src == ENTRY_BLOCK_PTR)
+      if (edge_in->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
         {
           bb = create_basic_block (before, NULL, edge_in->dest->prev_bb);
           BB_COPY_PARTITION (bb, edge_in->dest);
@@ -1873,7 +1880,7 @@ rtl_split_edge (edge edge_in)
 
   /* Can't allow a region crossing edge to be fallthrough.  */
   if (BB_PARTITION (bb) != BB_PARTITION (edge_in->dest)
-      && edge_in->dest != EXIT_BLOCK_PTR)
+      && edge_in->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       new_bb = force_nonfallthru (single_succ_edge (bb));
       gcc_assert (!new_bb);
@@ -1888,7 +1895,7 @@ rtl_split_edge (edge edge_in)
     }
   else
     {
-      if (edge_in->src != ENTRY_BLOCK_PTR)
+      if (edge_in->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  /* For asm goto even splitting of fallthru edge might
 	     need insn patching, as other labels might point to the
@@ -1896,7 +1903,7 @@ rtl_split_edge (edge edge_in)
 	  rtx last = BB_END (edge_in->src);
 	  if (last
 	      && JUMP_P (last)
-	      && edge_in->dest != EXIT_BLOCK_PTR
+	      && edge_in->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      && extract_asm_operands (PATTERN (last)) != NULL_RTX
 	      && patch_jump_insn (last, before, bb))
 	    df_set_bb_dirty (edge_in->src);
@@ -1943,7 +1950,7 @@ commit_one_edge_insertion (edge e)
 
   /* Figure out where to put these insns.  If the destination has
      one predecessor, insert there.  Except for the exit block.  */
-  if (single_pred_p (e->dest) && e->dest != EXIT_BLOCK_PTR)
+  if (single_pred_p (e->dest) && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       bb = e->dest;
 
@@ -1972,7 +1979,7 @@ commit_one_edge_insertion (edge e)
      the basic block.  */
   else if ((e->flags & EDGE_ABNORMAL) == 0
 	   && single_succ_p (e->src)
-	   && e->src != ENTRY_BLOCK_PTR
+	   && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	   && (!JUMP_P (BB_END (e->src))
 	       || simplejump_p (BB_END (e->src))))
     {
@@ -2025,7 +2032,7 @@ commit_one_edge_insertion (edge e)
 	 to EXIT.  */
 
       e = single_succ_edge (bb);
-      gcc_assert (e->dest == EXIT_BLOCK_PTR
+      gcc_assert (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  && single_succ_p (bb) && (e->flags & EDGE_FALLTHRU));
 
       e->flags &= ~EDGE_FALLTHRU;
@@ -2057,7 +2064,8 @@ commit_edge_insertions (void)
   verify_flow_info ();
 #endif
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -2428,8 +2436,8 @@ rtl_verify_edges (void)
 	    n_fallthru++, fallthru = e;
 
 	  is_crossing = (BB_PARTITION (e->src) != BB_PARTITION (e->dest)
-			 && e->src != ENTRY_BLOCK_PTR
-			 && e->dest != EXIT_BLOCK_PTR);
+			 && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+			 && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun));
           has_crossing_edge |= is_crossing;
 	  if (e->flags & EDGE_CROSSING)
 	    {
@@ -2832,8 +2840,8 @@ rtl_verify_fallthru (void)
 		break;
 	    }
 	}
-      else if (e->src != ENTRY_BLOCK_PTR
-	       && e->dest != EXIT_BLOCK_PTR)
+      else if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	       && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  rtx insn;
 
@@ -2872,10 +2880,10 @@ rtl_verify_bb_layout (void)
   rtx x;
   int num_bb_notes;
   const rtx rtx_first = get_insns ();
-  basic_block last_bb_seen = ENTRY_BLOCK_PTR, curr_bb = NULL;
+  basic_block last_bb_seen = ENTRY_BLOCK_PTR_FOR_FN (cfun), curr_bb = NULL;
 
   num_bb_notes = 0;
-  last_bb_seen = ENTRY_BLOCK_PTR;
+  last_bb_seen = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   for (x = rtx_first; x; x = NEXT_INSN (x))
     {
@@ -3062,7 +3070,7 @@ purge_dead_edges (basic_block bb)
 	      ei_next (&ei);
 	      continue;
 	    }
-	  else if (e->dest != EXIT_BLOCK_PTR
+	  else if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		   && BB_HEAD (e->dest) == JUMP_LABEL (insn))
 	    /* If the destination block is the target of the jump,
 	       keep the edge.  */
@@ -3070,7 +3078,8 @@ purge_dead_edges (basic_block bb)
 	      ei_next (&ei);
 	      continue;
 	    }
-	  else if (e->dest == EXIT_BLOCK_PTR && returnjump_p (insn))
+	  else if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
+		   && returnjump_p (insn))
 	    /* If the destination block is the exit block, and this
 	       instruction is a return, then keep the edge.  */
 	    {
@@ -3319,7 +3328,7 @@ skip_insns_after_block (basic_block bb)
   rtx insn, last_insn, next_head, prev;
 
   next_head = NULL_RTX;
-  if (bb->next_bb != EXIT_BLOCK_PTR)
+  if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     next_head = BB_HEAD (bb->next_bb);
 
   for (last_insn = insn = BB_END (bb); (insn = NEXT_INSN (insn)) != 0; )
@@ -3468,7 +3477,7 @@ outof_cfg_layout_mode (void)
   basic_block bb;
 
   FOR_EACH_BB (bb)
-    if (bb->next_bb != EXIT_BLOCK_PTR)
+    if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
 
   cfg_layout_finalize ();
@@ -3577,7 +3586,8 @@ relink_block_chain (bool stay_in_cfglayout_mode)
   if (dump_file)
     {
       fprintf (dump_file, "Reordered sequence:\n");
-      for (bb = ENTRY_BLOCK_PTR->next_bb, index = NUM_FIXED_BLOCKS;
+      for (bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb, index =
+	   NUM_FIXED_BLOCKS;
 	   bb;
 	   bb = (basic_block) bb->aux, index++)
 	{
@@ -3595,15 +3605,15 @@ relink_block_chain (bool stay_in_cfglayout_mode)
     }
 
   /* Now reorder the blocks.  */
-  prev_bb = ENTRY_BLOCK_PTR;
-  bb = ENTRY_BLOCK_PTR->next_bb;
+  prev_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+  bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb;
   for (; bb; prev_bb = bb, bb = (basic_block) bb->aux)
     {
       bb->prev_bb = prev_bb;
       prev_bb->next_bb = bb;
     }
-  prev_bb->next_bb = EXIT_BLOCK_PTR;
-  EXIT_BLOCK_PTR->prev_bb = prev_bb;
+  prev_bb->next_bb = EXIT_BLOCK_PTR_FOR_FN (cfun);
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb = prev_bb;
 
   /* Then, clean up the aux fields.  */
   FOR_ALL_BB (bb)
@@ -3644,7 +3654,8 @@ fixup_reorder_chain (void)
   /* First do the bulk reordering -- rechain the blocks without regard to
      the needed changes to jumps and labels.  */
 
-  for (bb = ENTRY_BLOCK_PTR->next_bb; bb; bb = (basic_block) bb->aux)
+  for (bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb; bb; bb = (basic_block)
+       bb->aux)
     {
       if (BB_HEADER (bb))
 	{
@@ -3687,7 +3698,8 @@ fixup_reorder_chain (void)
   /* Now add jumps and labels as needed to match the blocks new
      outgoing edges.  */
 
-  for (bb = ENTRY_BLOCK_PTR->next_bb; bb ; bb = (basic_block) bb->aux)
+  for (bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb; bb ; bb = (basic_block)
+       bb->aux)
     {
       edge e_fall, e_taken, e;
       rtx bb_end_insn;
@@ -3728,7 +3740,7 @@ fixup_reorder_chain (void)
 
 	      /* If the old fallthru is still next, nothing to do.  */
 	      if (bb->aux == e_fall->dest
-		  || e_fall->dest == EXIT_BLOCK_PTR)
+		  || e_fall->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		continue;
 
 	      /* The degenerated case of conditional jump jumping to the next
@@ -3749,7 +3761,8 @@ fixup_reorder_chain (void)
 		  if (note
 		      && XINT (note, 0) < REG_BR_PROB_BASE / 2
 		      && invert_jump (bb_end_insn,
-				      (e_fall->dest == EXIT_BLOCK_PTR
+				      (e_fall->dest
+				       == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				       ? NULL_RTX
 				       : label_for_bb (e_fall->dest)), 0))
 		    {
@@ -3771,7 +3784,8 @@ fixup_reorder_chain (void)
 	      /* Otherwise we can try to invert the jump.  This will
 		 basically never fail, however, keep up the pretense.  */
 	      else if (invert_jump (bb_end_insn,
-				    (e_fall->dest == EXIT_BLOCK_PTR
+				    (e_fall->dest
+				     == EXIT_BLOCK_PTR_FOR_FN (cfun)
 				     ? NULL_RTX
 				     : label_for_bb (e_fall->dest)), 0))
 		{
@@ -3793,7 +3807,7 @@ fixup_reorder_chain (void)
 		 __builtin_unreachable ()), nothing to do.  */
 	      if (! e_fall
 		  || bb->aux == e_fall->dest
-		  || e_fall->dest == EXIT_BLOCK_PTR)
+		  || e_fall->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		continue;
 
 	      /* Otherwise we'll have to use the fallthru fixup below.  */
@@ -3820,7 +3834,7 @@ fixup_reorder_chain (void)
 	    continue;
 
 	  /* A fallthru to exit block.  */
-	  if (e_fall->dest == EXIT_BLOCK_PTR)
+	  if (e_fall->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    continue;
 	}
 
@@ -3880,7 +3894,7 @@ fixup_reorder_chain (void)
 		  continue;
 		}
 	      dest = e->dest;
-	      if (dest == EXIT_BLOCK_PTR)
+	      if (dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		{
 		  /* Non-fallthru edges to the exit block cannot be split.  */
 		  if (!(e->flags & EDGE_FALLTHRU))
@@ -3958,13 +3972,13 @@ fixup_fallthru_exit_predecessor (void)
      value.  */
   gcc_assert (reload_completed);
 
-  e = find_fallthru_edge (EXIT_BLOCK_PTR->preds);
+  e = find_fallthru_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
   if (e)
     bb = e->src;
 
   if (bb && bb->aux)
     {
-      basic_block c = ENTRY_BLOCK_PTR->next_bb;
+      basic_block c = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb;
 
       /* If the very first block is the one with the fall-through exit
 	 edge, we have to split that block.  */
@@ -4000,7 +4014,7 @@ force_one_exit_fallthru (void)
   edge_iterator ei;
   basic_block forwarder, bb;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (e->flags & EDGE_FALLTHRU)
       {
 	if (predecessor == NULL)
@@ -4018,7 +4032,8 @@ force_one_exit_fallthru (void)
   /* Exit has several fallthru predecessors.  Create a forwarder block for
      them.  */
   forwarder = split_edge (predecessor);
-  for (ei = ei_start (EXIT_BLOCK_PTR->preds); (e = ei_safe_edge (ei)); )
+  for (ei = ei_start (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
+       (e = ei_safe_edge (ei)); )
     {
       if (e->src == forwarder
 	  || !(e->flags & EDGE_FALLTHRU))
@@ -4166,7 +4181,7 @@ cfg_layout_duplicate_bb (basic_block bb)
   insn = duplicate_insn_chain (BB_HEAD (bb), BB_END (bb));
   new_bb = create_basic_block (insn,
 			       insn ? get_last_insn () : NULL,
-			       EXIT_BLOCK_PTR->prev_bb);
+			       EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
 
   BB_COPY_PARTITION (new_bb, bb);
   if (BB_HEADER (bb))
@@ -4313,14 +4328,14 @@ cfg_layout_redirect_edge_and_branch (edge e, basic_block dest)
   if (e->dest == dest)
     return e;
 
-  if (e->src != ENTRY_BLOCK_PTR
+  if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
       && (ret = try_redirect_by_replacing_jump (e, dest, true)))
     {
       df_set_bb_dirty (src);
       return ret;
     }
 
-  if (e->src == ENTRY_BLOCK_PTR
+  if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
       && (e->flags & EDGE_FALLTHRU) && !(e->flags & EDGE_COMPLEX))
     {
       if (dump_file)
@@ -4447,7 +4462,7 @@ cfg_layout_delete_block (basic_block bb)
 	    set_last_insn (insn);
 	}
     }
-  if (bb->next_bb != EXIT_BLOCK_PTR)
+  if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     to = &BB_HEADER (bb->next_bb);
   else
     to = &cfg_layout_function_footer;
@@ -4504,7 +4519,7 @@ cfg_layout_can_merge_blocks_p (basic_block a, basic_block b)
   if (NEXT_INSN (BB_END (a)) != BB_HEAD (b))
     {
       edge e = find_fallthru_edge (b->succs);
-      if (e && e->dest == EXIT_BLOCK_PTR)
+      if (e && e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	return false;
     }
 
@@ -4515,7 +4530,8 @@ cfg_layout_can_merge_blocks_p (basic_block a, basic_block b)
 	  && a != b
 	  /* Must be simple edge.  */
 	  && !(single_succ_edge (a)->flags & EDGE_COMPLEX)
-	  && a != ENTRY_BLOCK_PTR && b != EXIT_BLOCK_PTR
+	  && a != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	  && b != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  /* If the jump insn has side effects, we can't kill the edge.
 	     When not optimizing, try_redirect_by_replacing_jump will
 	     not allow us to redirect an edge by replacing a table jump.  */
@@ -4634,11 +4650,11 @@ static basic_block
 cfg_layout_split_edge (edge e)
 {
   basic_block new_bb =
-    create_basic_block (e->src != ENTRY_BLOCK_PTR
+    create_basic_block (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 			? NEXT_INSN (BB_END (e->src)) : get_insns (),
 			NULL_RTX, e->src);
 
-  if (e->dest == EXIT_BLOCK_PTR)
+  if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
     BB_COPY_PARTITION (new_bb, e->src);
   else
     BB_COPY_PARTITION (new_bb, e->dest);
@@ -4663,7 +4679,8 @@ rtl_block_empty_p (basic_block bb)
 {
   rtx insn;
 
-  if (bb == ENTRY_BLOCK_PTR || bb == EXIT_BLOCK_PTR)
+  if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+      || bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return true;
 
   FOR_BB_INSNS (bb, insn)
@@ -4770,7 +4787,8 @@ rtl_flow_call_edges_add (sbitmap blocks)
   if (! blocks)
     check_last_block = true;
   else
-    check_last_block = bitmap_bit_p (blocks, EXIT_BLOCK_PTR->prev_bb->index);
+    check_last_block = bitmap_bit_p (blocks,
+				     EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb->index);
 
   /* In the last basic block, before epilogue generation, there will be
      a fallthru edge to EXIT.  Special care is required if the last insn
@@ -4786,7 +4804,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
      Handle this by adding a dummy instruction in a new last basic block.  */
   if (check_last_block)
     {
-      basic_block bb = EXIT_BLOCK_PTR->prev_bb;
+      basic_block bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
       rtx insn = BB_END (bb);
 
       /* Back up past insns that must be kept in the same block as a call.  */
@@ -4798,7 +4816,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
 	{
 	  edge e;
 
-	  e = find_edge (bb, EXIT_BLOCK_PTR);
+	  e = find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  if (e)
 	    {
 	      insert_insn_on_edge (gen_use (const0_rtx), e);
@@ -4846,7 +4864,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
 #ifdef ENABLE_CHECKING
 	      if (split_at_insn == BB_END (bb))
 		{
-		  e = find_edge (bb, EXIT_BLOCK_PTR);
+		  e = find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun));
 		  gcc_assert (e == NULL);
 		}
 #endif
@@ -4860,7 +4878,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
 		    blocks_split++;
 		}
 
-	      make_edge (bb, EXIT_BLOCK_PTR, EDGE_FAKE);
+	      make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
 	    }
 
 	  if (insn == BB_HEAD (bb))
@@ -4952,7 +4970,7 @@ rtl_can_remove_branch_p (const_edge e)
   const_rtx insn = BB_END (src), set;
 
   /* The conditions are taken from try_redirect_by_replacing_jump.  */
-  if (target == EXIT_BLOCK_PTR)
+  if (target == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
   if (e->flags & (EDGE_ABNORMAL_CALL | EDGE_EH))
diff --git a/gcc/cgraphbuild.c b/gcc/cgraphbuild.c
index 7834b06..21f6ebe 100644
--- a/gcc/cgraphbuild.c
+++ b/gcc/cgraphbuild.c
@@ -198,7 +198,7 @@ record_eh_tables (struct cgraph_node *node, struct function *fun)
 int
 compute_call_stmt_bb_frequency (tree decl, basic_block bb)
 {
-  int entry_freq = ENTRY_BLOCK_PTR_FOR_FUNCTION
+  int entry_freq = ENTRY_BLOCK_PTR_FOR_FN
   		     (DECL_STRUCT_FUNCTION (decl))->frequency;
   int freq = bb->frequency;
 
@@ -441,7 +441,7 @@ rebuild_cgraph_edges (void)
   cgraph_node_remove_callees (node);
   ipa_remove_all_references (&node->ref_list);
 
-  node->count = ENTRY_BLOCK_PTR->count;
+  node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
 
   FOR_EACH_BB (bb)
     {
@@ -493,7 +493,7 @@ cgraph_rebuild_references (void)
     else
       i++;
 
-  node->count = ENTRY_BLOCK_PTR->count;
+  node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
 
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/cgraphunit.c b/gcc/cgraphunit.c
index b84e198..fb23abe 100644
--- a/gcc/cgraphunit.c
+++ b/gcc/cgraphunit.c
@@ -1336,10 +1336,10 @@ init_lowered_empty_function (tree decl, bool in_ssa)
   loops_for_fn (cfun)->state |= LOOPS_MAY_HAVE_MULTIPLE_LATCHES;
 
   /* Create BB for body of the function and connect it properly.  */
-  bb = create_basic_block (NULL, (void *) 0, ENTRY_BLOCK_PTR);
-  make_edge (ENTRY_BLOCK_PTR, bb, EDGE_FALLTHRU);
-  make_edge (bb, EXIT_BLOCK_PTR, 0);
-  add_bb_to_loop (bb, ENTRY_BLOCK_PTR->loop_father);
+  bb = create_basic_block (NULL, (void *) 0, ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), bb, EDGE_FALLTHRU);
+  make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
+  add_bb_to_loop (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->loop_father);
 
   return bb;
 }
@@ -1627,7 +1627,7 @@ expand_thunk (struct cgraph_node *node, bool output_asm_thunks)
 		  gsi_insert_after (&bsi, stmt, GSI_NEW_STMT);
 		  make_edge (bb, then_bb, EDGE_TRUE_VALUE);
 		  make_edge (bb, else_bb, EDGE_FALSE_VALUE);
-		  make_edge (return_bb, EXIT_BLOCK_PTR, 0);
+		  make_edge (return_bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
 		  make_edge (then_bb, return_bb, EDGE_FALLTHRU);
 		  make_edge (else_bb, return_bb, EDGE_FALLTHRU);
 		  bsi = gsi_last_bb (then_bb);
diff --git a/gcc/combine.c b/gcc/combine.c
index fb5c881..d685a7f 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -1157,7 +1157,7 @@ combine_instructions (rtx f, unsigned int nregs)
   setup_incoming_promotions (first);
   /* Allow the entry block and the first block to fall into the same EBB.
      Conceptually the incoming promotions are assigned to the entry block.  */
-  last_bb = ENTRY_BLOCK_PTR;
+  last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   create_log_links ();
   FOR_EACH_BB (this_basic_block)
@@ -1209,7 +1209,7 @@ combine_instructions (rtx f, unsigned int nregs)
   label_tick = label_tick_ebb_start = 1;
   init_reg_last ();
   setup_incoming_promotions (first);
-  last_bb = ENTRY_BLOCK_PTR;
+  last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   FOR_EACH_BB (this_basic_block)
     {
@@ -1592,7 +1592,7 @@ set_nonzero_bits_and_sign_copies (rtx x, const_rtx set, void *data)
       /* If this register is undefined at the start of the file, we can't
 	 say what its contents were.  */
       && ! REGNO_REG_SET_P
-           (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), REGNO (x))
+	   (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb), REGNO (x))
       && HWI_COMPUTABLE_MODE_P (GET_MODE (x)))
     {
       reg_stat_type *rsp = &reg_stat[REGNO (x)];
@@ -3938,7 +3938,7 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p,
 	ni2dest = SET_DEST (newi2pat);
 
       for (insn = NEXT_INSN (i3);
-	   insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR
+	   insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 		    || insn != BB_HEAD (this_basic_block->next_bb));
 	   insn = NEXT_INSN (insn))
 	{
@@ -4054,7 +4054,8 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p,
 	      && ! find_reg_note (i2, REG_UNUSED,
 				  SET_DEST (XVECEXP (PATTERN (i2), 0, i))))
 	    for (temp = NEXT_INSN (i2);
-		 temp && (this_basic_block->next_bb == EXIT_BLOCK_PTR
+		 temp
+		 && (this_basic_block->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 			  || BB_HEAD (this_basic_block) != temp);
 		 temp = NEXT_INSN (temp))
 	      if (temp != i3 && INSN_P (temp))
@@ -9468,7 +9469,8 @@ reg_nonzero_bits_for_combine (const_rtx x, enum machine_mode mode,
 	  || (REGNO (x) >= FIRST_PSEUDO_REGISTER
 	      && REG_N_SETS (REGNO (x)) == 1
 	      && !REGNO_REG_SET_P
-	          (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), REGNO (x)))))
+		  (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb),
+		   REGNO (x)))))
     {
       *nonzero &= rsp->last_set_nonzero_bits;
       return NULL;
@@ -9535,7 +9537,8 @@ reg_num_sign_bit_copies_for_combine (const_rtx x, enum machine_mode mode,
 	  || (REGNO (x) >= FIRST_PSEUDO_REGISTER
 	      && REG_N_SETS (REGNO (x)) == 1
 	      && !REGNO_REG_SET_P
-	          (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), REGNO (x)))))
+		  (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb),
+		   REGNO (x)))))
     {
       *result = rsp->last_set_sign_bit_copies;
       return NULL;
@@ -12564,7 +12567,8 @@ get_last_value_validate (rtx *loc, rtx insn, int tick, int replace)
 	      || (! (regno >= FIRST_PSEUDO_REGISTER
 		     && REG_N_SETS (regno) == 1
 		     && (!REGNO_REG_SET_P
-			 (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), regno)))
+			 (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb),
+			  regno)))
 		  && rsp->last_set_label > tick))
 	  {
 	    if (replace)
@@ -12679,7 +12683,7 @@ get_last_value (const_rtx x)
 	  && (regno < FIRST_PSEUDO_REGISTER
 	      || REG_N_SETS (regno) != 1
 	      || REGNO_REG_SET_P
-		 (DF_LR_IN (ENTRY_BLOCK_PTR->next_bb), regno))))
+		 (DF_LR_IN (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb), regno))))
     return 0;
 
   /* If the value was set in a later insn than the ones we are processing,
@@ -13740,7 +13744,7 @@ distribute_links (struct insn_link *links)
 	 since most links don't point very far away.  */
 
       for (insn = NEXT_INSN (link->insn);
-	   (insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR
+	   (insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 		     || BB_HEAD (this_basic_block->next_bb) != insn));
 	   insn = NEXT_INSN (insn))
 	if (DEBUG_INSN_P (insn))
diff --git a/gcc/config/alpha/alpha.c b/gcc/config/alpha/alpha.c
index a5171ea..c55835e 100644
--- a/gcc/config/alpha/alpha.c
+++ b/gcc/config/alpha/alpha.c
@@ -4835,7 +4835,8 @@ alpha_gp_save_rtx (void)
 	 label.  Emit the sequence properly on the edge.  We are only
 	 invoked from dw2_build_landing_pads and finish_eh_generation
 	 will call commit_edge_insertions thanks to a kludge.  */
-      insert_insn_on_edge (seq, single_succ_edge (ENTRY_BLOCK_PTR));
+      insert_insn_on_edge (seq,
+			   single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 
       cfun->machine->gp_save_rtx = m;
     }
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 3cd53b0..e8b5f83 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -5943,7 +5943,8 @@ require_pic_register (void)
 	         we can't yet emit instructions directly in the final
 		 insn stream.  Queue the insns on the entry edge, they will
 		 be committed after everything else is expanded.  */
-	      insert_insn_on_edge (seq, single_succ_edge (ENTRY_BLOCK_PTR));
+	      insert_insn_on_edge (seq,
+				   single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 	    }
 	}
     }
@@ -18386,7 +18387,8 @@ arm_r3_live_at_start_p (void)
   /* Just look at cfg info, which is still close enough to correct at this
      point.  This gives false positives for broken functions that might use
      uninitialized data that happens to be allocated in r3, but who cares?  */
-  return REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR), 3);
+  return REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+			  3);
 }
 
 /* Compute the number of bytes used to store the static chain register on the
@@ -19919,7 +19921,7 @@ any_sibcall_could_use_r3 (void)
 
   if (!crtl->tail_call_emit)
     return false;
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (e->flags & EDGE_SIBCALL)
       {
 	rtx call = BB_END (e->src);
diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index 0d473cb..d7af939 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3600,7 +3600,7 @@ hwloop_optimize (hwloop_info loop)
 
       if (single_pred_p (bb)
 	  && single_pred_edge (bb)->flags & EDGE_FALLTHRU
-	  && single_pred (bb) != ENTRY_BLOCK_PTR)
+	  && single_pred (bb) != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  bb = single_pred (bb);
 	  last_insn = BB_END (bb);
diff --git a/gcc/config/frv/frv.c b/gcc/config/frv/frv.c
index 6e74fe4..a5eb2c1 100644
--- a/gcc/config/frv/frv.c
+++ b/gcc/config/frv/frv.c
@@ -8027,7 +8027,7 @@ frv_optimize_membar_global (basic_block bb, struct frv_io *first_io,
   /* We need to keep the membar if there is an edge to the exit block.  */
   FOR_EACH_EDGE (succ, ei, bb->succs)
   /* for (succ = bb->succ; succ != 0; succ = succ->succ_next) */
-    if (succ->dest == EXIT_BLOCK_PTR)
+    if (succ->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
       return;
 
   /* Work out the union of all successor blocks.  */
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 7ae9f57..b702413 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -5593,7 +5593,7 @@ ix86_eax_live_at_start_p (void)
      to correct at this point.  This gives false positives for broken
      functions that might use uninitialized data that happens to be
      allocated in eax, but who cares?  */
-  return REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR), 0);
+  return REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR_FOR_FN (cfun)), 0);
 }
 
 static bool
@@ -9301,7 +9301,7 @@ ix86_compute_frame_layout (struct ix86_frame *frame)
      Recompute the value as needed.  Do not recompute when amount of registers
      didn't change as reload does multiple calls to the function and does not
      expect the decision to change within single iteration.  */
-  else if (!optimize_bb_for_size_p (ENTRY_BLOCK_PTR)
+  else if (!optimize_bb_for_size_p (ENTRY_BLOCK_PTR_FOR_FN (cfun))
            && cfun->machine->use_fast_prologue_epilogue_nregs != frame->nregs)
     {
       int count = frame->nregs;
@@ -11390,7 +11390,7 @@ ix86_expand_epilogue (int style)
       /* Leave results in shorter dependency chains on CPUs that are
 	 able to grok it fast.  */
       else if (TARGET_USE_LEAVE
-	       || optimize_bb_for_size_p (EXIT_BLOCK_PTR)
+	       || optimize_bb_for_size_p (EXIT_BLOCK_PTR_FOR_FN (cfun))
 	       || !cfun->machine->use_fast_prologue_epilogue)
 	ix86_emit_leave ();
       else
@@ -29838,7 +29838,7 @@ add_condition_to_bb (tree function_decl, tree version_decl,
   make_edge (bb1, bb3, EDGE_FALSE_VALUE); 
 
   remove_edge (e23);
-  make_edge (bb2, EXIT_BLOCK_PTR, 0);
+  make_edge (bb2, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
 
   pop_cfun ();
 
@@ -36573,7 +36573,7 @@ ix86_pad_returns (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       basic_block bb = e->src;
       rtx ret = BB_END (bb);
@@ -36673,14 +36673,14 @@ ix86_count_insn (basic_block bb)
       edge prev_e;
       edge_iterator prev_ei;
 
-      if (e->src == ENTRY_BLOCK_PTR)
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  min_prev_count = 0;
 	  break;
 	}
       FOR_EACH_EDGE (prev_e, prev_ei, e->src->preds)
 	{
-	  if (prev_e->src == ENTRY_BLOCK_PTR)
+	  if (prev_e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      int count = ix86_count_insn_bb (e->src);
 	      if (count < min_prev_count)
@@ -36704,7 +36704,7 @@ ix86_pad_short_function (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       rtx ret = BB_END (e->src);
       if (JUMP_P (ret) && ANY_RETURN_P (PATTERN (ret)))
@@ -36744,7 +36744,7 @@ ix86_seh_fixup_eh_fallthru (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       rtx insn, next;
 
diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
index 307681c..71bc666 100644
--- a/gcc/config/ia64/ia64.c
+++ b/gcc/config/ia64/ia64.c
@@ -3492,7 +3492,7 @@ ia64_expand_prologue (void)
       edge e;
       edge_iterator ei;
 
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	if ((e->flags & EDGE_FAKE) == 0
 	    && (e->flags & EDGE_FALLTHRU) != 0)
 	  break;
@@ -10187,7 +10187,8 @@ ia64_asm_unwind_emit (FILE *asm_out_file, rtx insn)
 
   if (NOTE_INSN_BASIC_BLOCK_P (insn))
     {
-      last_block = NOTE_BASIC_BLOCK (insn)->next_bb == EXIT_BLOCK_PTR;
+      last_block = (NOTE_BASIC_BLOCK (insn)->next_bb
+		    == EXIT_BLOCK_PTR_FOR_FN (cfun));
 
       /* Restore unwind state from immediately before the epilogue.  */
       if (need_copy_state)
diff --git a/gcc/config/nds32/nds32.c b/gcc/config/nds32/nds32.c
index 4454bf2..008f088 100644
--- a/gcc/config/nds32/nds32.c
+++ b/gcc/config/nds32/nds32.c
@@ -4566,7 +4566,7 @@ nds32_fp_as_gp_check_available (void)
       || frame_pointer_needed
       || NDS32_REQUIRED_CALLEE_SAVED_P (FP_REGNUM)
       || (cfun->stdarg == 1)
-      || (find_fallthru_edge (EXIT_BLOCK_PTR->preds) == NULL))
+      || (find_fallthru_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) == NULL))
     return 0;
 
   /* Now we can check the possibility of using fp_as_gp optimization.  */
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index 5c39d94..7556eb6 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -22953,7 +22953,7 @@ rs6000_emit_prologue (void)
 				      && DEFAULT_ABI == ABI_V4
 				      && flag_pic
 				      && ! info->lr_save_p
-				      && EDGE_COUNT (EXIT_BLOCK_PTR->preds) > 0);
+				      && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) > 0);
       if (save_LR_around_toc_setup)
 	{
 	  rtx lr = gen_rtx_REG (Pmode, LR_REGNO);
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 35a44f2..9b8bd1e 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -967,7 +967,7 @@ cprop_jump (basic_block bb, rtx setcc, rtx jump, rtx from, rtx src)
       edge_iterator ei;
 
       FOR_EACH_EDGE (e, ei, bb->succs)
-	if (e->dest != EXIT_BLOCK_PTR
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	    && BB_HEAD (e->dest) == JUMP_LABEL (jump))
 	  {
 	    e->flags |= EDGE_FALLTHRU;
@@ -1376,7 +1376,7 @@ find_implicit_sets (void)
 	? BRANCH_EDGE (bb)->dest : FALLTHRU_EDGE (bb)->dest;
 
       /* If DEST doesn't go anywhere, ignore it.  */
-      if (! dest || dest == EXIT_BLOCK_PTR)
+      if (! dest || dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       /* We have found a suitable implicit set.  Try to record it now as
@@ -1612,7 +1612,7 @@ bypass_block (basic_block bb, rtx setcc, rtx jump)
 	  old_dest = e->dest;
 	  if (dest != NULL
 	      && dest != old_dest
-	      && dest != EXIT_BLOCK_PTR)
+	      && dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
             {
 	      redirect_edge_and_branch_force (e, dest);
 
@@ -1664,15 +1664,15 @@ bypass_conditional_jumps (void)
   rtx dest;
 
   /* Note we start at block 1.  */
-  if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR)
+  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return 0;
 
   bypass_last_basic_block = last_basic_block;
   mark_dfs_back_edges ();
 
   changed = 0;
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb->next_bb,
-		  EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->next_bb,
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       /* Check for more than one predecessor.  */
       if (!single_pred_p (bb))
@@ -1836,7 +1836,8 @@ one_cprop_pass (void)
       /* Allocate vars to track sets of regs.  */
       reg_set_bitmap = ALLOC_REG_SET (NULL);
 
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb->next_bb, EXIT_BLOCK_PTR,
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->next_bb,
+		      EXIT_BLOCK_PTR_FOR_FN (cfun),
 		      next_bb)
 	{
 	  /* Reset tables used to keep track of what's still valid [since
diff --git a/gcc/cse.c b/gcc/cse.c
index 43fa1e8..e0f7796 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -6200,7 +6200,7 @@ cse_find_path (basic_block first_bb, struct cse_basic_block_data *data,
 	      && e == BRANCH_EDGE (previous_bb_in_path))
 	    {
 	      bb = FALLTHRU_EDGE (previous_bb_in_path)->dest;
-	      if (bb != EXIT_BLOCK_PTR
+	      if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		  && single_pred_p (bb)
 		  /* We used to assert here that we would only see blocks
 		     that we have not visited yet.  But we may end up
@@ -6254,7 +6254,7 @@ cse_find_path (basic_block first_bb, struct cse_basic_block_data *data,
 
 	  if (e
 	      && !((e->flags & EDGE_ABNORMAL_CALL) && cfun->has_nonlocal_label)
-	      && e->dest != EXIT_BLOCK_PTR
+	      && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      && single_pred_p (e->dest)
 	      /* Avoid visiting basic blocks twice.  The large comment
 		 above explains why this can happen.  */
@@ -7166,7 +7166,7 @@ cse_cc_succs (basic_block bb, basic_block orig_bb, rtx cc_reg, rtx cc_src,
 	continue;
 
       if (EDGE_COUNT (e->dest->preds) != 1
-	  || e->dest == EXIT_BLOCK_PTR
+	  || e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  /* Avoid endless recursion on unreachable blocks.  */
 	  || e->dest == orig_bb)
 	continue;
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index 59fc2f6..c6349c8 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -1007,7 +1007,7 @@ static void
 df_lr_confluence_0 (basic_block bb)
 {
   bitmap op1 = &df_lr_get_bb_info (bb->index)->out;
-  if (bb != EXIT_BLOCK_PTR)
+  if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     bitmap_copy (op1, &df->hardware_regs_used);
 }
 
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index aace96d..eb7e4d4 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -3873,7 +3873,7 @@ df_entry_block_defs_collect (struct df_collection_rec *collection_rec,
   EXECUTE_IF_SET_IN_BITMAP (entry_block_defs, 0, i, bi)
     {
       df_ref_record (DF_REF_ARTIFICIAL, collection_rec, regno_reg_rtx[i], NULL,
-		     ENTRY_BLOCK_PTR, NULL, DF_REF_REG_DEF, 0);
+		     ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, DF_REF_REG_DEF, 0);
     }
 
   df_canonize_collection_rec (collection_rec);
@@ -4034,17 +4034,17 @@ df_exit_block_uses_collect (struct df_collection_rec *collection_rec, bitmap exi
 
   EXECUTE_IF_SET_IN_BITMAP (exit_block_uses, 0, i, bi)
     df_ref_record (DF_REF_ARTIFICIAL, collection_rec, regno_reg_rtx[i], NULL,
-		   EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0);
+		   EXIT_BLOCK_PTR_FOR_FN (cfun), NULL, DF_REF_REG_USE, 0);
 
 #if FRAME_POINTER_REGNUM != ARG_POINTER_REGNUM
   /* It is deliberate that this is not put in the exit block uses but
      I do not know why.  */
   if (reload_completed
       && !bitmap_bit_p (exit_block_uses, ARG_POINTER_REGNUM)
-      && bb_has_eh_pred (EXIT_BLOCK_PTR)
+      && bb_has_eh_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))
       && fixed_regs[ARG_POINTER_REGNUM])
     df_ref_record (DF_REF_ARTIFICIAL, collection_rec, regno_reg_rtx[ARG_POINTER_REGNUM], NULL,
-		   EXIT_BLOCK_PTR, NULL, DF_REF_REG_USE, 0);
+		   EXIT_BLOCK_PTR_FOR_FN (cfun), NULL, DF_REF_REG_USE, 0);
 #endif
 
   df_canonize_collection_rec (collection_rec);
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 6530109..3d88c0d 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -240,14 +240,14 @@ calc_dfs_tree_nonrec (struct dom_info *di, basic_block bb, bool reverse)
   if (reverse)
     {
       ei = ei_start (bb->preds);
-      en_block = EXIT_BLOCK_PTR;
-      ex_block = ENTRY_BLOCK_PTR;
+      en_block = EXIT_BLOCK_PTR_FOR_FN (cfun);
+      ex_block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
     }
   else
     {
       ei = ei_start (bb->succs);
-      en_block = ENTRY_BLOCK_PTR;
-      ex_block = EXIT_BLOCK_PTR;
+      en_block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+      ex_block = EXIT_BLOCK_PTR_FOR_FN (cfun);
     }
 
   /* When the stack is empty we break out of this loop.  */
@@ -333,7 +333,8 @@ static void
 calc_dfs_tree (struct dom_info *di, bool reverse)
 {
   /* The first block is the ENTRY_BLOCK (or EXIT_BLOCK if REVERSE).  */
-  basic_block begin = reverse ? EXIT_BLOCK_PTR : ENTRY_BLOCK_PTR;
+  basic_block begin = (reverse ? EXIT_BLOCK_PTR_FOR_FN (cfun)
+		       : ENTRY_BLOCK_PTR_FOR_FN (cfun));
   di->dfs_order[last_basic_block] = di->dfsnum;
   di->dfs_to_bb[di->dfsnum] = begin;
   di->dfsnum++;
@@ -501,9 +502,9 @@ calc_idoms (struct dom_info *di, bool reverse)
   edge_iterator ei, einext;
 
   if (reverse)
-    en_block = EXIT_BLOCK_PTR;
+    en_block = EXIT_BLOCK_PTR_FOR_FN (cfun);
   else
-    en_block = ENTRY_BLOCK_PTR;
+    en_block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   /* Go backwards in DFS order, to first look at the leafs.  */
   v = di->nodes;
@@ -1097,7 +1098,7 @@ prune_bbs_to_update_dominators (vec<basic_block> bbs,
 
   for (i = 0; bbs.iterate (i, &bb);)
     {
-      if (bb == ENTRY_BLOCK_PTR)
+      if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	goto succeed;
 
       if (single_pred_p (bb))
@@ -1171,7 +1172,7 @@ determine_dominators_for_sons (struct graph *g, vec<basic_block> bbs,
   if (son[y] == -1)
     return;
   if (y == (int) bbs.length ())
-    ybb = ENTRY_BLOCK_PTR;
+    ybb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   else
     ybb = bbs[y];
 
@@ -1344,7 +1345,7 @@ iterate_fix_dominators (enum cdi_direction dir, vec<basic_block> bbs,
 	set_immediate_dominator (CDI_DOMINATORS, bb, NULL);
       *map->insert (bb) = i;
     }
-  *map->insert (ENTRY_BLOCK_PTR) = n;
+  *map->insert (ENTRY_BLOCK_PTR_FOR_FN (cfun)) = n;
 
   g = new_graph (n + 1);
   for (y = 0; y < g->n_vertices; y++)
diff --git a/gcc/domwalk.c b/gcc/domwalk.c
index 4c7354e..3350e4b 100644
--- a/gcc/domwalk.c
+++ b/gcc/domwalk.c
@@ -169,8 +169,8 @@ dom_walker::walk (basic_block bb)
     {
       /* Don't worry about unreachable blocks.  */
       if (EDGE_COUNT (bb->preds) > 0
-	  || bb == ENTRY_BLOCK_PTR
-	  || bb == EXIT_BLOCK_PTR)
+	  || bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	  || bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  /* Callback for subclasses to do custom things before we have walked
 	     the dominator children, but before we walk statements.  */
diff --git a/gcc/dse.c b/gcc/dse.c
index 9662da8..6584ea3 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -2751,7 +2751,7 @@ dse_step1 (void)
 	  if (stores_off_frame_dead_at_return
 	      && (EDGE_COUNT (bb->succs) == 0
 		  || (single_succ_p (bb)
-		      && single_succ (bb) == EXIT_BLOCK_PTR
+		      && single_succ (bb) == EXIT_BLOCK_PTR_FOR_FN (cfun)
 		      && ! crtl->calls_eh_return)))
 	    {
 	      insn_info_t i_ptr = active_local_stores;
diff --git a/gcc/except.c b/gcc/except.c
index f8296b2..f7dc193 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -1241,7 +1241,7 @@ sjlj_emit_function_enter (rtx dispatch_label)
       }
 
   if (fn_begin_outside_block)
-    insert_insn_on_edge (seq, single_succ_edge (ENTRY_BLOCK_PTR));
+    insert_insn_on_edge (seq, single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
   else
     emit_insn_after (seq, fn_begin);
 }
@@ -1509,7 +1509,7 @@ finish_eh_generation (void)
 
   if (targetm_common.except_unwind_info (&global_options) == UI_SJLJ
       /* Kludge for Alpha (see alpha_gp_save_rtx).  */
-      || single_succ_edge (ENTRY_BLOCK_PTR)->insns.r)
+      || single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun))->insns.r)
     commit_edge_insertions ();
 
   /* Redirect all EH edges from the post_landing_pad to the landing pad.  */
diff --git a/gcc/final.c b/gcc/final.c
index 2d206f1..f2adde9 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -762,7 +762,7 @@ compute_alignments (void)
 	  && (branch_frequency > freq_threshold
 	      || (bb->frequency > bb->prev_bb->frequency * 10
 		  && (bb->prev_bb->frequency
-		      <= ENTRY_BLOCK_PTR->frequency / 2))))
+		      <= ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency / 2))))
 	{
 	  log = JUMP_ALIGN (label);
 	  if (dump_file)
diff --git a/gcc/function.c b/gcc/function.c
index 87953e3..fde4a8e 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -3978,7 +3978,8 @@ regno_clobbered_at_setjmp (bitmap setjmp_crosses, int regno)
     return false;
 
   return ((REG_N_SETS (regno) > 1
-	   || REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR), regno))
+	   || REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+			       regno))
 	  && REGNO_REG_SET_P (setjmp_crosses, regno));
 }
 
@@ -5400,7 +5401,7 @@ next_block_for_reg (basic_block bb, int regno, int end_regno)
 
   /* We can sometimes encounter dead code.  Don't try to move it
      into the exit block.  */
-  if (!live_edge || live_edge->dest == EXIT_BLOCK_PTR)
+  if (!live_edge || live_edge->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return NULL;
 
   /* Reject targets of abnormal edges.  This is needed for correctness
@@ -5725,7 +5726,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
 
   src_bbs.create (EDGE_COUNT (last_bb->preds));
   FOR_EACH_EDGE (e, ei, last_bb->preds)
-    if (e->src != ENTRY_BLOCK_PTR)
+    if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
       src_bbs.quick_push (e->src);
 
   label = BB_HEAD (last_bb);
@@ -5805,7 +5806,7 @@ convert_jumps_to_returns (basic_block last_bb, bool simple_p,
 	}
 
       /* Fix up the CFG for the successful change we just made.  */
-      redirect_edge_succ (e, EXIT_BLOCK_PTR);
+      redirect_edge_succ (e, EXIT_BLOCK_PTR_FOR_FN (cfun));
       e->flags &= ~EDGE_CROSSING;
     }
   src_bbs.release ();
@@ -5897,7 +5898,7 @@ thread_prologue_and_epilogue_insns (void)
 
   df_analyze ();
 
-  rtl_profile_for_bb (ENTRY_BLOCK_PTR);
+  rtl_profile_for_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   inserted = false;
   seq = NULL_RTX;
@@ -5907,8 +5908,8 @@ thread_prologue_and_epilogue_insns (void)
   /* Can't deal with multiple successors of the entry block at the
      moment.  Function should always have at least one entry
      point.  */
-  gcc_assert (single_succ_p (ENTRY_BLOCK_PTR));
-  entry_edge = single_succ_edge (ENTRY_BLOCK_PTR);
+  gcc_assert (single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
+  entry_edge = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   orig_entry_edge = entry_edge;
 
   split_prologue_seq = NULL_RTX;
@@ -6081,7 +6082,7 @@ thread_prologue_and_epilogue_insns (void)
 	  basic_block tmp_bb = vec.pop ();
 
 	  FOR_EACH_EDGE (e, ei, tmp_bb->succs)
-	    if (e->dest != EXIT_BLOCK_PTR
+	    if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 		&& bitmap_set_bit (&bb_flags, e->dest->index))
 	      vec.quick_push (e->dest);
 	}
@@ -6089,7 +6090,7 @@ thread_prologue_and_epilogue_insns (void)
       /* Find the set of basic blocks that need no prologue, have a
 	 single successor, can be duplicated, meet a max size
 	 requirement, and go to the exit via like blocks.  */
-      vec.quick_push (EXIT_BLOCK_PTR);
+      vec.quick_push (EXIT_BLOCK_PTR_FOR_FN (cfun));
       while (!vec.is_empty ())
 	{
 	  basic_block tmp_bb = vec.pop ();
@@ -6266,7 +6267,7 @@ thread_prologue_and_epilogue_insns (void)
 		  {
 		    /* Otherwise put the copy at the end of the function.  */
 		    copy_bb = create_basic_block (NULL_RTX, NULL_RTX,
-						  EXIT_BLOCK_PTR->prev_bb);
+						  EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
 		    BB_COPY_PARTITION (copy_bb, bb);
 		  }
 
@@ -6280,7 +6281,7 @@ thread_prologue_and_epilogue_insns (void)
 		    dup_block_and_redirect (tbb, copy_bb, insert_point,
 					    &bb_flags);
 		    tbb = single_succ (tbb);
-		    if (tbb == EXIT_BLOCK_PTR)
+		    if (tbb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		      break;
 		    e = split_block (copy_bb, PREV_INSN (insert_point));
 		    copy_bb = e->dest;
@@ -6294,7 +6295,8 @@ thread_prologue_and_epilogue_insns (void)
 		if (CALL_P (PREV_INSN (insert_point))
 		    && SIBLING_CALL_P (PREV_INSN (insert_point)))
 		  eflags = EDGE_SIBCALL | EDGE_ABNORMAL;
-		make_single_succ_edge (copy_bb, EXIT_BLOCK_PTR, eflags);
+		make_single_succ_edge (copy_bb, EXIT_BLOCK_PTR_FOR_FN (cfun),
+				       eflags);
 
 		/* verify_flow_info doesn't like a note after a
 		   sibling call.  */
@@ -6325,15 +6327,15 @@ thread_prologue_and_epilogue_insns (void)
 
   /* If the exit block has no non-fake predecessors, we don't need
      an epilogue.  */
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if ((e->flags & EDGE_FAKE) == 0)
       break;
   if (e == NULL)
     goto epilogue_done;
 
-  rtl_profile_for_bb (EXIT_BLOCK_PTR);
+  rtl_profile_for_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
 
-  exit_fallthru_edge = find_fallthru_edge (EXIT_BLOCK_PTR->preds);
+  exit_fallthru_edge = find_fallthru_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
 
   /* If we're allowed to generate a simple return instruction, then by
      definition we don't need a full epilogue.  If the last basic
@@ -6349,10 +6351,10 @@ thread_prologue_and_epilogue_insns (void)
 
 	  /* convert_jumps_to_returns may add to EXIT_BLOCK_PTR->preds
 	     (but won't remove).  Stop at end of current preds.  */
-	  last = EDGE_COUNT (EXIT_BLOCK_PTR->preds);
+	  last = EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
 	  for (i = 0; i < last; i++)
 	    {
-	      e = EDGE_I (EXIT_BLOCK_PTR->preds, i);
+	      e = EDGE_I (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds, i);
 	      if (LABEL_P (BB_HEAD (e->src))
 		  && !bitmap_bit_p (&bb_flags, e->src->index)
 		  && !active_insn_between (BB_HEAD (e->src), BB_END (e->src)))
@@ -6416,7 +6418,7 @@ thread_prologue_and_epilogue_insns (void)
      code.  In order to be able to properly annotate these with unwind
      info, try to split them now.  If we get a valid split, drop an
      EPILOGUE_BEG note and mark the insns as epilogue insns.  */
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       rtx prev, last, trial;
 
@@ -6507,7 +6509,7 @@ epilogue_done:
 
       /* The epilogue insns we inserted may cause the exit edge to no longer
 	 be fallthru.  */
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	{
 	  if (((e->flags & EDGE_FALLTHRU) != 0)
 	      && returnjump_p (BB_END (e->src)))
@@ -6544,7 +6546,7 @@ epilogue_done:
 	}
 
       /* Also check returns we might need to add to tail blocks.  */
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	if (EDGE_COUNT (e->src->preds) != 0
 	    && (e->flags & EDGE_FAKE) != 0
 	    && !bitmap_bit_p (&bb_flags, e->src->index))
@@ -6559,7 +6561,7 @@ epilogue_done:
          inserting new BBs at the end of the function. Do this
          after the call to split_block above which may split
          the original exit pred.  */
-      exit_pred = EXIT_BLOCK_PTR->prev_bb;
+      exit_pred = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
 
       FOR_EACH_VEC_ELT (unconverted_simple_returns, i, e)
 	{
@@ -6596,7 +6598,7 @@ epilogue_done:
 	      emit_barrier_after (start);
 
 	      *pdest_bb = bb;
-	      make_edge (bb, EXIT_BLOCK_PTR, 0);
+	      make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
 	    }
 	  redirect_edge_and_branch_force (e, *pdest_bb);
 	}
@@ -6605,7 +6607,7 @@ epilogue_done:
 
   if (entry_edge != orig_entry_edge)
     {
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	if (EDGE_COUNT (e->src->preds) != 0
 	    && (e->flags & EDGE_FAKE) != 0
 	    && !bitmap_bit_p (&bb_flags, e->src->index))
@@ -6618,7 +6620,9 @@ epilogue_done:
 
 #ifdef HAVE_sibcall_epilogue
   /* Emit sibling epilogues before any sibling call sites.  */
-  for (ei = ei_start (EXIT_BLOCK_PTR->preds); (e = ei_safe_edge (ei)); )
+  for (ei = ei_start (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
+       (e = ei_safe_edge (ei));
+       )
     {
       basic_block bb = e->src;
       rtx insn = BB_END (bb);
@@ -6749,7 +6753,7 @@ reposition_prologue_and_epilogue_notes (void)
       edge_iterator ei;
       edge e;
 
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	{
 	  rtx insn, first = NULL, note = NULL;
 	  basic_block bb = e->src;
diff --git a/gcc/gcse.c b/gcc/gcse.c
index a37ac6b..3012c4d 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2063,7 +2063,7 @@ pre_expr_reaches_here_p_work (basic_block occr_bb, struct expr *expr,
     {
       basic_block pred_bb = pred->src;
 
-      if (pred->src == ENTRY_BLOCK_PTR
+      if (pred->src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	  /* Has predecessor has already been visited?  */
 	  || visited[pred_bb->index])
 	;/* Nothing to do.  */
@@ -2830,7 +2830,7 @@ compute_code_hoist_vbeinout (void)
 	 the convergence.  */
       FOR_EACH_BB_REVERSE (bb)
 	{
-	  if (bb->next_bb != EXIT_BLOCK_PTR)
+	  if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      bitmap_intersection_of_succs (hoist_vbeout[bb->index],
 					    hoist_vbein, bb);
@@ -2908,7 +2908,7 @@ update_bb_reg_pressure (basic_block bb, rtx from)
       FOR_EACH_EDGE (succ, ei, bb->succs)
 	{
 	  succ_bb = succ->dest;
-	  if (succ_bb == EXIT_BLOCK_PTR)
+	  if (succ_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    continue;
 
 	  if (bitmap_bit_p (BB_DATA (succ_bb)->live_in, REGNO (dreg)))
@@ -3041,7 +3041,7 @@ should_hoist_expr_to_dom (basic_block expr_bb, struct expr *expr,
     {
       basic_block pred_bb = pred->src;
 
-      if (pred->src == ENTRY_BLOCK_PTR)
+      if (pred->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	break;
       else if (pred_bb == expr_bb)
 	continue;
@@ -3185,16 +3185,16 @@ hoist_code (void)
       bb_size[bb->index] = to_head;
     }
 
-  gcc_assert (EDGE_COUNT (ENTRY_BLOCK_PTR->succs) == 1
-	      && (EDGE_SUCC (ENTRY_BLOCK_PTR, 0)->dest
-		  == ENTRY_BLOCK_PTR->next_bb));
+  gcc_assert (EDGE_COUNT (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs) == 1
+	      && (EDGE_SUCC (ENTRY_BLOCK_PTR_FOR_FN (cfun), 0)->dest
+		  == ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb));
 
   from_bbs = BITMAP_ALLOC (NULL);
   if (flag_ira_hoist_pressure)
     hoisted_bbs = BITMAP_ALLOC (NULL);
 
   dom_tree_walk = get_all_dominated_blocks (CDI_DOMINATORS,
-					    ENTRY_BLOCK_PTR->next_bb);
+					    ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb);
 
   /* Walk over each basic block looking for potentially hoistable
      expressions, nothing gets hoisted from the entry block.  */
diff --git a/gcc/gimple-iterator.c b/gcc/gimple-iterator.c
index 557bf35..a3e74fe 100644
--- a/gcc/gimple-iterator.c
+++ b/gcc/gimple-iterator.c
@@ -713,7 +713,7 @@ gimple_find_edge_insert_loc (edge e, gimple_stmt_iterator *gsi,
  restart:
   if (single_pred_p (dest)
       && gimple_seq_empty_p (phi_nodes (dest))
-      && dest != EXIT_BLOCK_PTR)
+      && dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       *gsi = gsi_start_bb (dest);
       if (gsi_end_p (*gsi))
@@ -744,7 +744,7 @@ gimple_find_edge_insert_loc (edge e, gimple_stmt_iterator *gsi,
   src = e->src;
   if ((e->flags & EDGE_ABNORMAL) == 0
       && single_succ_p (src)
-      && src != ENTRY_BLOCK_PTR)
+      && src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
     {
       *gsi = gsi_last_bb (src);
       if (gsi_end_p (*gsi))
@@ -830,7 +830,8 @@ gsi_commit_edge_inserts (void)
   edge e;
   edge_iterator ei;
 
-  gsi_commit_one_edge_insert (single_succ_edge (ENTRY_BLOCK_PTR), NULL);
+  gsi_commit_one_edge_insert (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+			      NULL);
 
   FOR_EACH_BB (bb)
     FOR_EACH_EDGE (e, ei, bb->succs)
diff --git a/gcc/gimple-ssa-strength-reduction.c b/gcc/gimple-ssa-strength-reduction.c
index 4eb897f..72c6284 100644
--- a/gcc/gimple-ssa-strength-reduction.c
+++ b/gcc/gimple-ssa-strength-reduction.c
@@ -735,7 +735,7 @@ slsr_process_phi (gimple phi, bool speed)
 	  derived_base_name = arg;
 
 	  if (SSA_NAME_IS_DEFAULT_DEF (arg))
-	    arg_bb = single_succ (ENTRY_BLOCK_PTR);
+	    arg_bb = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 	  else
 	    gimple_bb (SSA_NAME_DEF_STMT (arg));
 	}
diff --git a/gcc/graph.c b/gcc/graph.c
index 1dc9dbc..b75135a 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -195,7 +195,7 @@ draw_cfg_nodes_for_loop (pretty_printer *pp, int funcdef_no,
   const char *fillcolors[3] = { "grey88", "grey77", "grey66" };
 
   if (loop->header != NULL
-      && loop->latch != EXIT_BLOCK_PTR)
+      && loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun))
     pp_printf (pp,
 	       "\tsubgraph cluster_%d_%d {\n"
 	       "\tstyle=\"filled\";\n"
@@ -214,7 +214,7 @@ draw_cfg_nodes_for_loop (pretty_printer *pp, int funcdef_no,
   if (loop->header == NULL)
     return;
 
-  if (loop->latch == EXIT_BLOCK_PTR)
+  if (loop->latch == EXIT_BLOCK_PTR_FOR_FN (cfun))
     body = get_loop_body (loop);
   else
     body = get_loop_body_in_bfs_order (loop);
@@ -228,7 +228,7 @@ draw_cfg_nodes_for_loop (pretty_printer *pp, int funcdef_no,
 
   free (body);
 
-  if (loop->latch != EXIT_BLOCK_PTR)
+  if (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun))
     pp_printf (pp, "\t}\n");
 }
 
diff --git a/gcc/graphite-clast-to-gimple.c b/gcc/graphite-clast-to-gimple.c
index a661dbb..ad3e1dc 100644
--- a/gcc/graphite-clast-to-gimple.c
+++ b/gcc/graphite-clast-to-gimple.c
@@ -1098,7 +1098,7 @@ translate_clast_user (struct clast_user_stmt *stmt, edge next_e,
   gimple_bb_p gbb = PBB_BLACK_BOX (pbb);
   vec<tree> iv_map;
 
-  if (GBB_BB (gbb) == ENTRY_BLOCK_PTR)
+  if (GBB_BB (gbb) == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return next_e;
 
   nb_loops = number_of_loops (cfun);
diff --git a/gcc/graphite-scop-detection.c b/gcc/graphite-scop-detection.c
index 0017126..0cfb5a5 100644
--- a/gcc/graphite-scop-detection.c
+++ b/gcc/graphite-scop-detection.c
@@ -448,7 +448,7 @@ scopdet_basic_block_info (basic_block bb, loop_p outermost_loop,
   gimple stmt;
 
   /* XXX: ENTRY_BLOCK_PTR could be optimized in later steps.  */
-  basic_block entry_block = ENTRY_BLOCK_PTR;
+  basic_block entry_block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   stmt = harmful_stmt_in_bb (entry_block, outermost_loop, bb);
   result.difficult = (stmt != NULL);
   result.exit = NULL;
@@ -1030,7 +1030,7 @@ create_sese_edges (vec<sd_region> regions)
   FOR_EACH_VEC_ELT (regions, i, s)
     /* Don't handle multiple edges exiting the function.  */
     if (!find_single_exit_edge (s)
-	&& s->exit != EXIT_BLOCK_PTR)
+	&& s->exit != EXIT_BLOCK_PTR_FOR_FN (cfun))
       create_single_exit_edge (s);
 
   unmark_exit_edges (regions);
@@ -1402,7 +1402,8 @@ build_scops (vec<scop_p> *scops)
   stack_vec<sd_region, 3> regions;
 
   canonicalize_loop_closed_ssa_form ();
-  build_scops_1 (single_succ (ENTRY_BLOCK_PTR), ENTRY_BLOCK_PTR->loop_father,
+  build_scops_1 (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+		 ENTRY_BLOCK_PTR_FOR_FN (cfun)->loop_father,
 		 &regions, loop);
   create_sese_edges (regions);
   build_graphite_scops (regions, scops);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index beddc11..c98b36c 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -1615,7 +1615,7 @@ priority (rtx insn)
 
           /* Selective scheduling does not define RECOVERY_BLOCK macro.  */
 	  rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
-	  if (!rec || rec == EXIT_BLOCK_PTR)
+	  if (!rec || rec == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      prev_first = PREV_INSN (insn);
 	      twin = insn;
@@ -7522,7 +7522,7 @@ static void
 sched_extend_bb (void)
 {
   /* The following is done to keep current_sched_info->next_tail non null.  */
-  rtx end = BB_END (EXIT_BLOCK_PTR->prev_bb);
+  rtx end = BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
   rtx insn = DEBUG_INSN_P (end) ? prev_nondebug_insn (end) : end;
   if (NEXT_INSN (end) == 0
       || (!NOTE_P (insn)
@@ -7533,7 +7533,7 @@ sched_extend_bb (void)
       rtx note = emit_note_after (NOTE_INSN_DELETED, end);
       /* Make note appear outside BB.  */
       set_block_for_insn (note, NULL);
-      BB_END (EXIT_BLOCK_PTR->prev_bb) = end;
+      BB_END (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb) = end;
     }
 }
 
@@ -7551,7 +7551,7 @@ init_before_recovery (basic_block *before_recovery_ptr)
   basic_block last;
   edge e;
 
-  last = EXIT_BLOCK_PTR->prev_bb;
+  last = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
   e = find_fallthru_edge_from (last);
 
   if (e)
@@ -7591,7 +7591,8 @@ init_before_recovery (basic_block *before_recovery_ptr)
 
       redirect_edge_succ (e, single);
       make_single_succ_edge (single, empty, 0);
-      make_single_succ_edge (empty, EXIT_BLOCK_PTR, EDGE_FALLTHRU);
+      make_single_succ_edge (empty, EXIT_BLOCK_PTR_FOR_FN (cfun),
+			     EDGE_FALLTHRU);
 
       label = block_label (empty);
       x = emit_jump_insn_after (gen_jump (label), BB_END (single));
@@ -7734,14 +7735,14 @@ create_check_block_twin (rtx insn, bool mutate_p)
     }
   else
     {
-      rec = EXIT_BLOCK_PTR;
+      rec = EXIT_BLOCK_PTR_FOR_FN (cfun);
       label = NULL_RTX;
     }
 
   /* Emit CHECK.  */
   check = targetm.sched.gen_spec_check (insn, label, todo_spec);
 
-  if (rec != EXIT_BLOCK_PTR)
+  if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       /* To have mem_reg alive at the beginning of second_bb,
 	 we emit check BEFORE insn, so insn after splitting
@@ -7774,7 +7775,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
 
   /* Initialize TWIN (twin is a duplicate of original instruction
      in the recovery block).  */
-  if (rec != EXIT_BLOCK_PTR)
+  if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       sd_iterator_def sd_it;
       dep_t dep;
@@ -7811,7 +7812,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
      provide correct value for INSN_TICK (TWIN).  */
   sd_copy_back_deps (twin, insn, true);
 
-  if (rec != EXIT_BLOCK_PTR)
+  if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
     /* In case of branchy check, fix CFG.  */
     {
       basic_block first_bb, second_bb;
@@ -7823,7 +7824,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
       sched_create_recovery_edges (first_bb, rec, second_bb);
 
       sched_init_only_bb (second_bb, first_bb);
-      sched_init_only_bb (rec, EXIT_BLOCK_PTR);
+      sched_init_only_bb (rec, EXIT_BLOCK_PTR_FOR_FN (cfun));
 
       jump = BB_END (rec);
       haifa_init_insn (jump);
@@ -7864,7 +7865,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
       init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
       sd_add_dep (new_dep, false);
 
-      if (rec != EXIT_BLOCK_PTR)
+      if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  DEP_CON (new_dep) = twin;
 	  sd_add_dep (new_dep, false);
@@ -7913,7 +7914,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
   /* Future speculations: call the helper.  */
   process_insn_forw_deps_be_in_spec (insn, twin, fs);
 
-  if (rec != EXIT_BLOCK_PTR)
+  if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       /* Which types of dependencies should we use here is,
 	 generally, machine-dependent question...  But, for now,
@@ -8127,7 +8128,7 @@ unlink_bb_notes (basic_block first, basic_block last)
   bb_header = XNEWVEC (rtx, last_basic_block);
 
   /* Make a sentinel.  */
-  if (last->next_bb != EXIT_BLOCK_PTR)
+  if (last->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     bb_header[last->next_bb->index] = 0;
 
   first = first->next_bb;
@@ -8171,7 +8172,7 @@ restore_bb_notes (basic_block first)
   first = first->next_bb;
   /* Remember: FIRST is actually a second basic block in the ebb.  */
 
-  while (first != EXIT_BLOCK_PTR
+  while (first != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	 && bb_header[first->index])
     {
       rtx prev, label, note, next;
diff --git a/gcc/hw-doloop.c b/gcc/hw-doloop.c
index 5d26638..77c8149 100644
--- a/gcc/hw-doloop.c
+++ b/gcc/hw-doloop.c
@@ -260,7 +260,7 @@ discover_loop (hwloop_info loop, basic_block tail_bb, rtx tail_insn, rtx reg)
     {
       edge e;
       edge_iterator ei;
-      if (bb == EXIT_BLOCK_PTR)
+      if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  /* We've reached the exit block.  The loop must be bad. */
 	  if (dump_file)
@@ -539,7 +539,7 @@ reorder_loops (hwloop_info loops)
   
   FOR_EACH_BB (bb)
     {
-      if (bb->next_bb != EXIT_BLOCK_PTR)
+      if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bb->aux = bb->next_bb;
       else
 	bb->aux = NULL;
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index 17d26c5..ac0276c 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -3185,7 +3185,8 @@ merge_if_block (struct ce_if_block * ce_info)
       /* There should still be something at the end of the THEN or ELSE
          blocks taking us to our final destination.  */
 	gcc_assert (JUMP_P (last)
-		    || (EDGE_SUCC (combo_bb, 0)->dest == EXIT_BLOCK_PTR
+		    || (EDGE_SUCC (combo_bb, 0)->dest
+			== EXIT_BLOCK_PTR_FOR_FN (cfun)
 			&& CALL_P (last)
 			&& SIBLING_CALL_P (last))
 		    || ((EDGE_SUCC (combo_bb, 0)->flags & EDGE_EH)
@@ -3199,7 +3200,7 @@ merge_if_block (struct ce_if_block * ce_info)
      may be zero incoming edges if the THEN block didn't actually join
      back up (as with a call to a non-return function).  */
   else if (EDGE_COUNT (join_bb->preds) < 2
-	   && join_bb != EXIT_BLOCK_PTR)
+	   && join_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       /* We can merge the JOIN cleanly and update the dataflow try
 	 again on this pass.*/
@@ -3216,7 +3217,7 @@ merge_if_block (struct ce_if_block * ce_info)
 		  && single_succ (combo_bb) == join_bb);
 
       /* Remove the jump and cruft from the end of the COMBO block.  */
-      if (join_bb != EXIT_BLOCK_PTR)
+      if (join_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	tidy_fallthru_edge (single_succ_edge (combo_bb));
     }
 
@@ -3495,7 +3496,7 @@ cond_exec_find_if_block (struct ce_if_block * ce_info)
      code processing.  ??? we should fix this in the future.  */
   if (EDGE_COUNT (then_bb->succs) == 0)
     {
-      if (single_pred_p (else_bb) && else_bb != EXIT_BLOCK_PTR)
+      if (single_pred_p (else_bb) && else_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  rtx last_insn = BB_END (then_bb);
 
@@ -3586,7 +3587,8 @@ cond_exec_find_if_block (struct ce_if_block * ce_info)
   next = then_bb;
   if (else_bb && (next = next->next_bb) != else_bb)
     return FALSE;
-  if ((next = next->next_bb) != join_bb && join_bb != EXIT_BLOCK_PTR)
+  if ((next = next->next_bb) != join_bb
+      && join_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       if (else_bb)
 	join_bb = NULL;
@@ -3725,7 +3727,7 @@ block_has_only_trap (basic_block bb)
   rtx trap;
 
   /* We're not the exit block.  */
-  if (bb == EXIT_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return NULL_RTX;
 
   /* The block must have no successors.  */
@@ -3881,7 +3883,7 @@ find_if_case_1 (basic_block test_bb, edge then_edge, edge else_edge)
 				    predictable_edge_p (then_edge)))))
     return FALSE;
 
-  if (else_bb == EXIT_BLOCK_PTR)
+  if (else_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       rtx jump = BB_END (else_edge->src);
       gcc_assert (JUMP_P (jump));
@@ -3902,12 +3904,12 @@ find_if_case_1 (basic_block test_bb, edge then_edge, edge else_edge)
 
   if (then_bb->next_bb == else_bb
       && then_bb->prev_bb == test_bb
-      && else_bb != EXIT_BLOCK_PTR)
+      && else_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       redirect_edge_succ (FALLTHRU_EDGE (test_bb), else_bb);
       new_bb = 0;
     }
-  else if (else_bb == EXIT_BLOCK_PTR)
+  else if (else_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     new_bb = force_nonfallthru_and_redirect (FALLTHRU_EDGE (test_bb),
 					     else_bb, else_target);
   else
@@ -4196,9 +4198,9 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
 	 saved in caller-saved regs.  A caller-saved reg requires the
 	 prologue, killing a shrink-wrap opportunity.  */
       if ((flag_shrink_wrap && HAVE_simple_return && !epilogue_completed)
-	  && ENTRY_BLOCK_PTR->next_bb == test_bb
+	  && ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb == test_bb
 	  && single_succ_p (new_dest)
-	  && single_succ (new_dest) == EXIT_BLOCK_PTR
+	  && single_succ (new_dest) == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && bitmap_intersect_p (df_get_live_in (new_dest), merge_set))
 	{
 	  regset return_regs;
@@ -4213,8 +4215,10 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
 		&& targetm.calls.function_value_regno_p (i))
 	      bitmap_set_bit (return_regs, INCOMING_REGNO (i));
 
-	  bitmap_and_into (return_regs, df_get_live_out (ENTRY_BLOCK_PTR));
-	  bitmap_and_into (return_regs, df_get_live_in (EXIT_BLOCK_PTR));
+	  bitmap_and_into (return_regs,
+			   df_get_live_out (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
+	  bitmap_and_into (return_regs,
+			   df_get_live_in (EXIT_BLOCK_PTR_FOR_FN (cfun)));
 	  if (!bitmap_empty_p (return_regs))
 	    {
 	      FOR_BB_INSNS_REVERSE (new_dest, insn)
@@ -4259,7 +4263,7 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
     {
       if (JUMP_P (BB_END (dest_edge->src)))
 	new_dest_label = JUMP_LABEL (BB_END (dest_edge->src));
-      else if (new_dest == EXIT_BLOCK_PTR)
+      else if (new_dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	new_dest_label = ret_rtx;
       else
 	new_dest_label = block_label (new_dest);
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index 3cd335f..3d95de1 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -1841,9 +1841,9 @@ compute_bb_predicates (struct cgraph_node *node,
     }
 
   /* Entry block is always executable.  */
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (my_function)->aux
+  ENTRY_BLOCK_PTR_FOR_FN (my_function)->aux
     = pool_alloc (edge_predicate_pool);
-  *(struct predicate *) ENTRY_BLOCK_PTR_FOR_FUNCTION (my_function)->aux
+  *(struct predicate *) ENTRY_BLOCK_PTR_FOR_FN (my_function)->aux
     = true_predicate ();
 
   /* A simple dataflow propagation of predicates forward in the CFG.
@@ -2066,7 +2066,7 @@ record_modified (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef, void *data)
     return false;
   bitmap_set_bit (info->bb_set,
 		  SSA_NAME_IS_DEFAULT_DEF (vdef)
-		  ? ENTRY_BLOCK_PTR->index
+		  ? ENTRY_BLOCK_PTR_FOR_FN (cfun)->index
 		  : gimple_bb (SSA_NAME_DEF_STMT (vdef))->index);
   return false;
 }
@@ -2102,7 +2102,7 @@ param_change_prob (gimple stmt, int i)
 	return REG_BR_PROB_BASE;
 
       if (SSA_NAME_IS_DEFAULT_DEF (op))
-	init_freq = ENTRY_BLOCK_PTR->frequency;
+	init_freq = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency;
       else
 	init_freq = gimple_bb (SSA_NAME_DEF_STMT (op))->frequency;
 
@@ -2142,8 +2142,8 @@ param_change_prob (gimple stmt, int i)
       /* Assume that every memory is initialized at entry.
          TODO: Can we easilly determine if value is always defined
          and thus we may skip entry block?  */
-      if (ENTRY_BLOCK_PTR->frequency)
-	max = ENTRY_BLOCK_PTR->frequency;
+      if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency)
+	max = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency;
       else
 	max = 1;
 
diff --git a/gcc/ipa-pure-const.c b/gcc/ipa-pure-const.c
index 9e5b1ab..ed96c3c 100644
--- a/gcc/ipa-pure-const.c
+++ b/gcc/ipa-pure-const.c
@@ -1587,7 +1587,7 @@ local_pure_const (void)
 
   /* Do NORETURN discovery.  */
   if (!skip && !TREE_THIS_VOLATILE (current_function_decl)
-      && EDGE_COUNT (EXIT_BLOCK_PTR->preds) == 0)
+      && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) == 0)
     {
       warn_function_noreturn (cfun->decl);
       if (dump_file)
@@ -1723,7 +1723,7 @@ static unsigned int
 execute_warn_function_noreturn (void)
 {
   if (!TREE_THIS_VOLATILE (current_function_decl)
-      && EDGE_COUNT (EXIT_BLOCK_PTR->preds) == 0)
+      && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) == 0)
     warn_function_noreturn (current_function_decl);
   return 0;
 }
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index 59d1742..d7d6b8f 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -210,7 +210,7 @@ verify_non_ssa_vars (struct split_point *current, bitmap non_ssa_vars,
   bool ok = true;
 
   FOR_EACH_EDGE (e, ei, current->entry_bb->preds)
-    if (e->src != ENTRY_BLOCK_PTR
+    if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	&& !bitmap_bit_p (current->split_bbs, e->src->index))
       {
         worklist.safe_push (e->src);
@@ -223,7 +223,7 @@ verify_non_ssa_vars (struct split_point *current, bitmap non_ssa_vars,
       basic_block bb = worklist.pop ();
 
       FOR_EACH_EDGE (e, ei, bb->preds)
-	if (e->src != ENTRY_BLOCK_PTR
+	if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	    && bitmap_set_bit (seen, e->src->index))
 	  {
 	    gcc_checking_assert (!bitmap_bit_p (current->split_bbs,
@@ -396,7 +396,7 @@ consider_split (struct split_point *current, bitmap non_ssa_vars,
 
   /* Do not split when we would end up calling function anyway.  */
   if (incoming_freq
-      >= (ENTRY_BLOCK_PTR->frequency
+      >= (ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency
 	  * PARAM_VALUE (PARAM_PARTIAL_INLINING_ENTRY_PROBABILITY) / 100))
     {
       /* When profile is guessed, we can not expect it to give us
@@ -406,13 +406,13 @@ consider_split (struct split_point *current, bitmap non_ssa_vars,
 	 is likely noticeable win.  */
       if (back_edge
 	  && profile_status != PROFILE_READ
-	  && incoming_freq < ENTRY_BLOCK_PTR->frequency)
+	  && incoming_freq < ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
 	    fprintf (dump_file,
 		     "  Split before loop, accepting despite low frequencies %i %i.\n",
 		     incoming_freq,
-		     ENTRY_BLOCK_PTR->frequency);
+		     ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency);
 	}
       else
 	{
@@ -583,7 +583,7 @@ consider_split (struct split_point *current, bitmap non_ssa_vars,
 
   /* split_function fixes up at most one PHI non-virtual PHI node in return_bb,
      for the return value.  If there are other PHIs, give up.  */
-  if (return_bb != EXIT_BLOCK_PTR)
+  if (return_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       gimple_stmt_iterator psi;
 
@@ -650,15 +650,15 @@ static basic_block
 find_return_bb (void)
 {
   edge e;
-  basic_block return_bb = EXIT_BLOCK_PTR;
+  basic_block return_bb = EXIT_BLOCK_PTR_FOR_FN (cfun);
   gimple_stmt_iterator bsi;
   bool found_return = false;
   tree retval = NULL_TREE;
 
-  if (!single_pred_p (EXIT_BLOCK_PTR))
+  if (!single_pred_p (EXIT_BLOCK_PTR_FOR_FN (cfun)))
     return return_bb;
 
-  e = single_pred_edge (EXIT_BLOCK_PTR);
+  e = single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun));
   for (bsi = gsi_last_bb (e->src); !gsi_end_p (bsi); gsi_prev (&bsi))
     {
       gimple stmt = gsi_stmt (bsi);
@@ -937,7 +937,7 @@ find_split_points (int overall_time, int overall_size)
   current.split_size = 0;
   current.ssa_names_to_pass = BITMAP_ALLOC (NULL);
 
-  first.bb = ENTRY_BLOCK_PTR;
+  first.bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   first.edge_num = 0;
   first.overall_time = 0;
   first.overall_size = 0;
@@ -946,7 +946,7 @@ find_split_points (int overall_time, int overall_size)
   first.used_ssa_names = 0;
   first.bbs_visited = 0;
   stack.safe_push (first);
-  ENTRY_BLOCK_PTR->aux = (void *)(intptr_t)-1;
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->aux = (void *)(intptr_t)-1;
 
   while (!stack.is_empty ())
     {
@@ -957,7 +957,7 @@ find_split_points (int overall_time, int overall_size)
          articulation, we want to have processed everything reachable
 	 from articulation but nothing that reaches into it.  */
       if (entry->edge_num == EDGE_COUNT (entry->bb->succs)
-	  && entry->bb != ENTRY_BLOCK_PTR)
+	  && entry->bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  int pos = stack.length ();
 	  entry->can_split &= visit_bb (entry->bb, return_bb,
@@ -1009,7 +1009,7 @@ find_split_points (int overall_time, int overall_size)
 	  entry->edge_num++;
 
 	  /* New BB to visit, push it to the stack.  */
-	  if (dest != return_bb && dest != EXIT_BLOCK_PTR
+	  if (dest != return_bb && dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      && !dest->aux)
 	    {
 	      stack_entry new_entry;
@@ -1037,7 +1037,7 @@ find_split_points (int overall_time, int overall_size)
 	}
       /* We are done with examining the edges.  Pop off the value from stack
 	 and merge stuff we accumulate during the walk.  */
-      else if (entry->bb != ENTRY_BLOCK_PTR)
+      else if (entry->bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  stack_entry *prev = &stack[stack.length () - 2];
 
@@ -1063,7 +1063,7 @@ find_split_points (int overall_time, int overall_size)
       else
         stack.pop ();
     }
-  ENTRY_BLOCK_PTR->aux = NULL;
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->aux = NULL;
   FOR_EACH_BB (bb)
     bb->aux = NULL;
   stack.release ();
@@ -1139,7 +1139,7 @@ split_function (struct split_point *split_point)
   if (!split_part_return_p)
     ;
   /* We have no return block, so nothing is needed.  */
-  else if (return_bb == EXIT_BLOCK_PTR)
+  else if (return_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     ;
   /* When we do not want to return value, we need to construct
      new return block with empty return statement.
@@ -1166,7 +1166,7 @@ split_function (struct split_point *split_point)
 		break;
 	      }
 	}
-      e = make_edge (new_return_bb, EXIT_BLOCK_PTR, 0);
+      e = make_edge (new_return_bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
       e->probability = REG_BR_PROB_BASE;
       e->count = new_return_bb->count;
       if (current_loops)
@@ -1183,7 +1183,7 @@ split_function (struct split_point *split_point)
 
      Note this can happen whether or not we have a return value.  If we have
      a return value, then RETURN_BB may have PHIs for real operands too.  */
-  if (return_bb != EXIT_BLOCK_PTR)
+  if (return_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       bool phi_p = false;
       for (gsi = gsi_start_phis (return_bb); !gsi_end_p (gsi);)
@@ -1325,7 +1325,7 @@ split_function (struct split_point *split_point)
       push_cfun (DECL_STRUCT_FUNCTION (node->decl));
       var = BLOCK_VARS (DECL_INITIAL (node->decl));
       i = vec_safe_length (*debug_args);
-      cgsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+      cgsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
       do
 	{
 	  i -= 2;
@@ -1366,13 +1366,14 @@ split_function (struct split_point *split_point)
   else
     {
       e = make_edge (call_bb, return_bb,
-		     return_bb == EXIT_BLOCK_PTR ? 0 : EDGE_FALLTHRU);
+		     return_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+		     ? 0 : EDGE_FALLTHRU);
       e->count = call_bb->count;
       e->probability = REG_BR_PROB_BASE;
 
       /* If there is return basic block, see what value we need to store
          return value into and put call just before it.  */
-      if (return_bb != EXIT_BLOCK_PTR)
+      if (return_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  real_retval = retval = find_retval (return_bb);
 
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index ca6f64d..e249ba0 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -1745,7 +1745,7 @@ ira_loop_tree_body_rev_postorder (ira_loop_tree_node_t loop_node ATTRIBUTE_UNUSE
 		  ira_loop_tree_node_t pred_node;
 		  basic_block pred_bb = e->src;
 
-		  if (e->src == ENTRY_BLOCK_PTR)
+		  if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 		    continue;
 
 		  pred_node = IRA_BB_NODE_BY_INDEX (pred_bb->index);
diff --git a/gcc/ira-color.c b/gcc/ira-color.c
index 6c52a2b..30282aa 100644
--- a/gcc/ira-color.c
+++ b/gcc/ira-color.c
@@ -3100,7 +3100,7 @@ print_loop_title (ira_loop_tree_node_t loop_tree_node)
       {
 	fprintf (ira_dump_file, " %d", subloop_node->bb->index);
 	FOR_EACH_EDGE (e, ei, subloop_node->bb->succs)
-	  if (e->dest != EXIT_BLOCK_PTR
+	  if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      && ((dest_loop_node = IRA_BB_NODE (e->dest)->parent)
 		  != loop_tree_node))
 	    fprintf (ira_dump_file, "(->%d:l%d)",
diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
index cdd6941..198fa47 100644
--- a/gcc/ira-emit.c
+++ b/gcc/ira-emit.c
@@ -403,7 +403,7 @@ entered_from_non_parent_p (ira_loop_tree_node_t loop_node)
     if (bb_node->bb != NULL)
       {
 	FOR_EACH_EDGE (e, ei, bb_node->bb->preds)
-	  if (e->src != ENTRY_BLOCK_PTR
+	  if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	      && (src_loop_node = IRA_BB_NODE (e->src)->parent) != loop_node)
 	    {
 	      for (parent = src_loop_node->parent;
@@ -1263,7 +1263,7 @@ ira_emit (bool loops_p)
       at_bb_start[bb->index] = NULL;
       at_bb_end[bb->index] = NULL;
       FOR_EACH_EDGE (e, ei, bb->succs)
-	if (e->dest != EXIT_BLOCK_PTR)
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  generate_edge_moves (e);
     }
   allocno_last_set
diff --git a/gcc/ira-int.h b/gcc/ira-int.h
index b9b21ba..b46e7b0 100644
--- a/gcc/ira-int.h
+++ b/gcc/ira-int.h
@@ -43,8 +43,9 @@ along with GCC; see the file COPYING3.  If not see
    executed, frequency is always equivalent.  Otherwise rescale the
    edge frequency.  */
 #define REG_FREQ_FROM_EDGE_FREQ(freq)					   \
-  (optimize_size || (flag_branch_probabilities && !ENTRY_BLOCK_PTR->count) \
-   ? REG_FREQ_MAX : (freq * REG_FREQ_MAX / BB_FREQ_MAX)			   \
+  (optimize_size || (flag_branch_probabilities				   \
+		     && !ENTRY_BLOCK_PTR_FOR_FN (cfun)->count)		   \
+   ? REG_FREQ_MAX : (freq * REG_FREQ_MAX / BB_FREQ_MAX)		   \
    ? (freq * REG_FREQ_MAX / BB_FREQ_MAX) : 1)
 
 /* A modified value of flag `-fira-verbose' used internally.  */
diff --git a/gcc/ira.c b/gcc/ira.c
index a813b02..f5a5af8 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4865,7 +4865,7 @@ static bool
 split_live_ranges_for_shrink_wrap (void)
 {
   basic_block bb, call_dom = NULL;
-  basic_block first = single_succ (ENTRY_BLOCK_PTR);
+  basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   rtx insn, last_interesting_insn = NULL;
   bitmap_head need_new, reachable;
   vec<basic_block> queue;
@@ -4910,7 +4910,7 @@ split_live_ranges_for_shrink_wrap (void)
 
       bb = queue.pop ();
       FOR_EACH_EDGE (e, ei, bb->succs)
-	if (e->dest != EXIT_BLOCK_PTR
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	    && bitmap_set_bit (&reachable, e->dest->index))
 	  queue.quick_push (e->dest);
     }
diff --git a/gcc/lcm.c b/gcc/lcm.c
index 6266d48..aa63c72 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -121,8 +121,8 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
 
   /* Mark blocks which are predecessors of the exit block so that we
      can easily identify them below.  */
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
-    e->src->aux = EXIT_BLOCK_PTR;
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
+    e->src->aux = EXIT_BLOCK_PTR_FOR_FN (cfun);
 
   /* Iterate until the worklist is empty.  */
   while (qlen)
@@ -134,7 +134,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
       if (qout >= qend)
 	qout = worklist;
 
-      if (bb->aux == EXIT_BLOCK_PTR)
+      if (bb->aux == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	/* Do not clear the aux field for blocks which are predecessors of
 	   the EXIT block.  That way we never add then to the worklist
 	   again.  */
@@ -153,7 +153,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
 	   to add the predecessors of this block to the worklist
 	   if they are not already on the worklist.  */
 	FOR_EACH_EDGE (e, ei, bb->preds)
-	  if (!e->src->aux && e->src != ENTRY_BLOCK_PTR)
+	  if (!e->src->aux && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      *qin++ = e->src;
 	      e->src->aux = e;
@@ -188,11 +188,11 @@ compute_earliest (struct edge_list *edge_list, int n_exprs, sbitmap *antin,
     {
       pred = INDEX_EDGE_PRED_BB (edge_list, x);
       succ = INDEX_EDGE_SUCC_BB (edge_list, x);
-      if (pred == ENTRY_BLOCK_PTR)
+      if (pred == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	bitmap_copy (earliest[x], antin[succ->index]);
       else
 	{
-	  if (succ == EXIT_BLOCK_PTR)
+	  if (succ == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    bitmap_clear (earliest[x]);
 	  else
 	    {
@@ -276,7 +276,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
      do not want to be overly optimistic.  Consider an outgoing edge from
      the entry block.  That edge should always have a LATER value the
      same as EARLIEST for that edge.  */
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     bitmap_copy (later[(size_t) e->aux], earliest[(size_t) e->aux]);
 
   /* Add all the blocks to the worklist.  This prevents an early exit from
@@ -317,7 +317,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
 				      antloc[e->src->index])
 	    /* If LATER for an outgoing edge was changed, then we need
 	       to add the target of the outgoing edge to the worklist.  */
-	    && e->dest != EXIT_BLOCK_PTR && e->dest->aux == 0)
+	    && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun) && e->dest->aux == 0)
 	  {
 	    *qin++ = e->dest;
 	    e->dest->aux = e;
@@ -331,7 +331,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
      for the EXIT block.  We allocated an extra entry in the LATERIN array
      for just this purpose.  */
   bitmap_ones (laterin[last_basic_block]);
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     bitmap_and (laterin[last_basic_block],
 		     laterin[last_basic_block],
 		     later[(size_t) e->aux]);
@@ -358,7 +358,7 @@ compute_insert_delete (struct edge_list *edge_list, sbitmap *antloc,
     {
       basic_block b = INDEX_EDGE_SUCC_BB (edge_list, x);
 
-      if (b == EXIT_BLOCK_PTR)
+      if (b == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bitmap_and_compl (insert[x], later[x], laterin[last_basic_block]);
       else
 	bitmap_and_compl (insert[x], later[x], laterin[b->index]);
@@ -500,8 +500,8 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
 
   /* Mark blocks which are successors of the entry block so that we
      can easily identify them below.  */
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
-    e->dest->aux = ENTRY_BLOCK_PTR;
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
+    e->dest->aux = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   /* Iterate until the worklist is empty.  */
   while (qlen)
@@ -516,7 +516,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
       /* If one of the predecessor blocks is the ENTRY block, then the
 	 intersection of avouts is the null set.  We can identify such blocks
 	 by the special value in the AUX field in the block structure.  */
-      if (bb->aux == ENTRY_BLOCK_PTR)
+      if (bb->aux == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	/* Do not clear the aux field for blocks which are successors of the
 	   ENTRY block.  That way we never add then to the worklist again.  */
 	bitmap_clear (avin[bb->index]);
@@ -534,7 +534,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
 	   to add the successors of this block to the worklist
 	   if they are not already on the worklist.  */
 	FOR_EACH_EDGE (e, ei, bb->succs)
-	  if (!e->dest->aux && e->dest != EXIT_BLOCK_PTR)
+	  if (!e->dest->aux && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      *qin++ = e->dest;
 	      e->dest->aux = e;
@@ -570,11 +570,11 @@ compute_farthest (struct edge_list *edge_list, int n_exprs,
     {
       pred = INDEX_EDGE_PRED_BB (edge_list, x);
       succ = INDEX_EDGE_SUCC_BB (edge_list, x);
-      if (succ == EXIT_BLOCK_PTR)
+      if (succ == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bitmap_copy (farthest[x], st_avout[pred->index]);
       else
 	{
-	  if (pred == ENTRY_BLOCK_PTR)
+	  if (pred == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    bitmap_clear (farthest[x]);
 	  else
 	    {
@@ -624,7 +624,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
      do not want to be overly optimistic.  Consider an incoming edge to
      the exit block.  That edge should always have a NEARER value the
      same as FARTHEST for that edge.  */
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     bitmap_copy (nearer[(size_t)e->aux], farthest[(size_t)e->aux]);
 
   /* Add all the blocks to the worklist.  This prevents an early exit
@@ -656,7 +656,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
 				      st_avloc[e->dest->index])
 	    /* If NEARER for an incoming edge was changed, then we need
 	       to add the source of the incoming edge to the worklist.  */
-	    && e->src != ENTRY_BLOCK_PTR && e->src->aux == 0)
+	    && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun) && e->src->aux == 0)
 	  {
 	    *tos++ = e->src;
 	    e->src->aux = e;
@@ -667,7 +667,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
      for the ENTRY block.  We allocated an extra entry in the NEAREROUT array
      for just this purpose.  */
   bitmap_ones (nearerout[last_basic_block]);
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     bitmap_and (nearerout[last_basic_block],
 		     nearerout[last_basic_block],
 		     nearer[(size_t) e->aux]);
@@ -693,7 +693,7 @@ compute_rev_insert_delete (struct edge_list *edge_list, sbitmap *st_avloc,
   for (x = 0; x < NUM_EDGES (edge_list); x++)
     {
       basic_block b = INDEX_EDGE_PRED_BB (edge_list, x);
-      if (b == ENTRY_BLOCK_PTR)
+      if (b == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	bitmap_and_compl (insert[x], nearer[x], nearerout[last_basic_block]);
       else
 	bitmap_and_compl (insert[x], nearer[x], nearerout[b->index]);
diff --git a/gcc/loop-iv.c b/gcc/loop-iv.c
index 97aa52f..c01ee17 100644
--- a/gcc/loop-iv.c
+++ b/gcc/loop-iv.c
@@ -1937,7 +1937,7 @@ simplify_using_initial_values (struct loop *loop, enum rtx_code op, rtx *expr)
     return;
 
   e = loop_preheader_edge (loop);
-  if (e->src == ENTRY_BLOCK_PTR)
+  if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return;
 
   altered = ALLOC_REG_SET (&reg_obstack);
@@ -2068,7 +2068,7 @@ simplify_using_initial_values (struct loop *loop, enum rtx_code op, rtx *expr)
 	}
 
       if (!single_pred_p (e->src)
-	  || single_pred (e->src) == ENTRY_BLOCK_PTR)
+	  || single_pred (e->src) == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	break;
       e = single_pred_edge (e->src);
     }
diff --git a/gcc/loop-unswitch.c b/gcc/loop-unswitch.c
index 671ec19..c8f1281 100644
--- a/gcc/loop-unswitch.c
+++ b/gcc/loop-unswitch.c
@@ -433,7 +433,7 @@ unswitch_loop (struct loop *loop, basic_block unswitch_on, rtx cond, rtx cinsn)
 
   /* Create a block with the condition.  */
   prob = true_edge->probability;
-  switch_bb = create_empty_bb (EXIT_BLOCK_PTR->prev_bb);
+  switch_bb = create_empty_bb (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
   seq = compare_and_jump_seq (XEXP (cond, 0), XEXP (cond, 1), GET_CODE (cond),
 			      block_label (true_edge->dest),
 			      prob, cinsn);
diff --git a/gcc/lra-assigns.c b/gcc/lra-assigns.c
index 54ffc77..88fc693 100644
--- a/gcc/lra-assigns.c
+++ b/gcc/lra-assigns.c
@@ -612,7 +612,7 @@ find_hard_regno_for (int regno, int *cost, int try_only_hard_regno)
 		&& ! df_regs_ever_live_p (hard_regno + j))
 	      /* It needs save restore.	 */
 	      hard_regno_costs[hard_regno]
-		+= 2 * ENTRY_BLOCK_PTR->next_bb->frequency + 1;
+		+= 2 * ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->frequency + 1;
 	  priority = targetm.register_priority (hard_regno);
 	  if (best_hard_regno < 0 || hard_regno_costs[hard_regno] < best_cost
 	      || (hard_regno_costs[hard_regno] == best_cost
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index ee82c6f..94b6e25 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -5295,7 +5295,8 @@ lra_inheritance (void)
 	{
 	  if (lra_dump_file != NULL)
 	    fprintf (lra_dump_file, " %d", bb->index);
-	  if (bb->next_bb == EXIT_BLOCK_PTR || LABEL_P (BB_HEAD (bb->next_bb)))
+	  if (bb->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      || LABEL_P (BB_HEAD (bb->next_bb)))
 	    break;
 	  e = find_fallthru_edge (bb->succs);
 	  if (! e)
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index 2839c5c..efc19f2 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -1002,7 +1002,8 @@ lra_create_live_ranges (bool all_p)
   for (i = n_blocks_inverted - 1; i >= 0; --i)
     {
       bb = BASIC_BLOCK (post_order_rev_cfg[i]);
-      if (bb == EXIT_BLOCK_PTR || bb == ENTRY_BLOCK_PTR)
+      if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb
+	  == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
       process_bb_lives (bb, curr_point);
     }
diff --git a/gcc/lra.c b/gcc/lra.c
index 3c8b71d..0deae88 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -2065,8 +2065,8 @@ has_nonexceptional_receiver (void)
     bb->flags &= ~BB_REACHABLE;
 
   /* Place the exit block on our worklist.  */
-  EXIT_BLOCK_PTR->flags |= BB_REACHABLE;
-  *tos++ = EXIT_BLOCK_PTR;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->flags |= BB_REACHABLE;
+  *tos++ = EXIT_BLOCK_PTR_FOR_FN (cfun);
 
   /* Iterate: find everything reachable from what we've already seen.  */
   while (tos != worklist)
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 7b9f4ca..de25925 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -659,7 +659,7 @@ input_cfg (struct lto_input_block *ib, struct function *fn,
       index = streamer_read_hwi (ib);
     }
 
-  p_bb = ENTRY_BLOCK_PTR_FOR_FUNCTION (fn);
+  p_bb = ENTRY_BLOCK_PTR_FOR_FN (fn);
   index = streamer_read_hwi (ib);
   while (index != -1)
     {
@@ -996,7 +996,7 @@ input_function (tree fn_decl, struct data_in *data_in,
      of a gimple body is used by the cgraph routines, but we should
      really use the presence of the CFG.  */
   {
-    edge_iterator ei = ei_start (ENTRY_BLOCK_PTR->succs);
+    edge_iterator ei = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
     gimple_set_body (fn_decl, bb_seq (ei_edge (ei)->dest));
   }
 
diff --git a/gcc/lto-streamer-out.c b/gcc/lto-streamer-out.c
index 5e264fc..6f1585a 100644
--- a/gcc/lto-streamer-out.c
+++ b/gcc/lto-streamer-out.c
@@ -1594,7 +1594,7 @@ output_cfg (struct output_block *ob, struct function *fn)
 
   streamer_write_hwi (ob, -1);
 
-  bb = ENTRY_BLOCK_PTR;
+  bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   while (bb->next_bb)
     {
       streamer_write_hwi (ob, bb->next_bb->index);
diff --git a/gcc/mcf.c b/gcc/mcf.c
index 45adda3..e709f2a 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -508,7 +508,7 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
 
   /* Compute constants b, k_pos, k_neg used in the cost function calculation.
      b = sqrt(avg_vertex_weight(cfg)); k_pos = b; k_neg = 50b.  */
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     total_vertex_weight += bb->count;
 
   sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight /
@@ -523,7 +523,7 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
   if (dump_file)
     fprintf (dump_file, "\nVertex transformation:\n");
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
   {
     /* v'->v'': index1->(index1+1).  */
     i = 2 * bb->index;
@@ -1125,7 +1125,8 @@ adjust_cfg_counts (fixup_graph_type *fixup_graph)
   if (dump_file)
     fprintf (dump_file, "\nadjust_cfg_counts():\n");
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       i = 2 * bb->index;
 
@@ -1238,8 +1239,10 @@ adjust_cfg_counts (fixup_graph_type *fixup_graph)
         }
     }
 
-  ENTRY_BLOCK_PTR->count = sum_edge_counts (ENTRY_BLOCK_PTR->succs);
-  EXIT_BLOCK_PTR->count = sum_edge_counts (EXIT_BLOCK_PTR->preds);
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->count =
+		     sum_edge_counts (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->count =
+		     sum_edge_counts (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
 
   /* Compute edge probabilities.  */
   FOR_ALL_BB (bb)
diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
index d54f32c..ed45094 100644
--- a/gcc/mode-switching.c
+++ b/gcc/mode-switching.c
@@ -211,7 +211,7 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
      fallthrough edge; there can be at most one, but there could be
      none at all, e.g. when exit is called.  */
   pre_exit = 0;
-  FOR_EACH_EDGE (eg, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (eg, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (eg->flags & EDGE_FALLTHRU)
       {
 	basic_block src_bb = eg->src;
@@ -221,7 +221,7 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
 	/* If this function returns a value at the end, we have to
 	   insert the final mode switch before the return value copy
 	   to its hard register.  */
-	if (EDGE_COUNT (EXIT_BLOCK_PTR->preds) == 1
+	if (EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) == 1
 	    && NONJUMP_INSN_P ((last_insn = BB_END (src_bb)))
 	    && GET_CODE (PATTERN (last_insn)) == USE
 	    && GET_CODE ((ret_reg = XEXP (PATTERN (last_insn), 0))) == REG)
@@ -492,7 +492,7 @@ optimize_mode_switching (void)
 #if defined (MODE_ENTRY) && defined (MODE_EXIT)
   /* Split the edge from the entry block, so that we can note that
      there NORMAL_MODE is supplied.  */
-  post_entry = split_edge (single_succ_edge (ENTRY_BLOCK_PTR));
+  post_entry = split_edge (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
   pre_exit = create_pre_exit (n_entities, entity_map, num_modes);
 #endif
 
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index 1f2a014..f313044 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -1308,7 +1308,7 @@ canon_loop (struct loop *loop)
 
   /* Avoid annoying special cases of edges going to exit
      block.  */
-  FOR_EACH_EDGE (e, i, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, i, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if ((e->flags & EDGE_FALLTHRU) && (EDGE_COUNT (e->src->succs) > 1))
       split_edge (e);
 
@@ -3344,7 +3344,7 @@ rest_of_handle_sms (void)
 
   /* Finalize layout changes.  */
   FOR_EACH_BB (bb)
-    if (bb->next_bb != EXIT_BLOCK_PTR)
+    if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
   free_dominance_info (CDI_DOMINATORS);
   cfg_layout_finalize ();
diff --git a/gcc/omp-low.c b/gcc/omp-low.c
index 783b422..bf834bf 100644
--- a/gcc/omp-low.c
+++ b/gcc/omp-low.c
@@ -8235,7 +8235,7 @@ build_omp_regions (void)
 {
   gcc_assert (root_omp_region == NULL);
   calculate_dominance_info (CDI_DOMINATORS);
-  build_omp_regions_1 (ENTRY_BLOCK_PTR, NULL, false);
+  build_omp_regions_1 (ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, false);
 }
 
 /* Main entry point for expanding OMP-GIMPLE into runtime calls.  */
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index 941007f..9ce17e5 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -1158,12 +1158,12 @@ eliminate_partially_redundant_loads (void)
 
   /* Note we start at block 1.  */
 
-  if (ENTRY_BLOCK_PTR->next_bb == EXIT_BLOCK_PTR)
+  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return;
 
   FOR_BB_BETWEEN (bb,
-		  ENTRY_BLOCK_PTR->next_bb->next_bb,
-		  EXIT_BLOCK_PTR,
+		  ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->next_bb,
+		  EXIT_BLOCK_PTR_FOR_FN (cfun),
 		  next_bb)
     {
       /* Don't try anything on basic blocks with strange predecessors.  */
diff --git a/gcc/predict.c b/gcc/predict.c
index e22c96c..919dbe9 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -129,11 +129,11 @@ maybe_hot_frequency_p (struct function *fun, int freq)
   if (profile_status_for_function (fun) == PROFILE_ABSENT)
     return true;
   if (node->frequency == NODE_FREQUENCY_EXECUTED_ONCE
-      && freq < (ENTRY_BLOCK_PTR_FOR_FUNCTION (fun)->frequency * 2 / 3))
+      && freq < (ENTRY_BLOCK_PTR_FOR_FN (fun)->frequency * 2 / 3))
     return false;
   if (PARAM_VALUE (HOT_BB_FREQUENCY_FRACTION) == 0)
     return false;
-  if (freq < (ENTRY_BLOCK_PTR_FOR_FUNCTION (fun)->frequency
+  if (freq < (ENTRY_BLOCK_PTR_FOR_FN (fun)->frequency
 	      / PARAM_VALUE (HOT_BB_FREQUENCY_FRACTION)))
     return false;
   return true;
@@ -251,24 +251,27 @@ probably_never_executed (struct function *fun,
 	return false;
       if (!frequency)
 	return true;
-      if (!ENTRY_BLOCK_PTR->frequency)
+      if (!ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency)
 	return false;
-      if (ENTRY_BLOCK_PTR->count)
+      if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->count)
 	{
           gcov_type computed_count;
           /* Check for possibility of overflow, in which case entry bb count
              is large enough to do the division first without losing much
              precision.  */
-          if (ENTRY_BLOCK_PTR->count < REG_BR_PROB_BASE * REG_BR_PROB_BASE)
+	  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->count < REG_BR_PROB_BASE *
+	      REG_BR_PROB_BASE)
             {
               gcov_type scaled_count
-                  = frequency * ENTRY_BLOCK_PTR->count * unlikely_count_fraction;
-              computed_count = RDIV (scaled_count, ENTRY_BLOCK_PTR->frequency);
+		  = frequency * ENTRY_BLOCK_PTR_FOR_FN (cfun)->count *
+	     unlikely_count_fraction;
+	      computed_count = RDIV (scaled_count,
+				     ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency);
             }
           else
             {
-              computed_count = RDIV (ENTRY_BLOCK_PTR->count,
-                                     ENTRY_BLOCK_PTR->frequency);
+	      computed_count = RDIV (ENTRY_BLOCK_PTR_FOR_FN (cfun)->count,
+				     ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency);
               computed_count *= frequency * unlikely_count_fraction;
             }
           if (computed_count >= profile_info->runs)
@@ -613,7 +616,8 @@ void
 gimple_predict_edge (edge e, enum br_predictor predictor, int probability)
 {
   gcc_assert (profile_status != PROFILE_GUESSED);
-  if ((e->src != ENTRY_BLOCK_PTR && EDGE_COUNT (e->src->succs) > 1)
+  if ((e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun) && EDGE_COUNT (e->src->succs) >
+       1)
       && flag_guess_branch_prob && optimize)
     {
       struct edge_prediction *i = XNEW (struct edge_prediction);
@@ -2170,7 +2174,7 @@ apply_return_prediction (void)
   enum prediction direction;
   edge_iterator ei;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       return_stmt = last_stmt (e->src);
       if (return_stmt
@@ -2218,7 +2222,7 @@ tree_bb_level_predictions (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (!(e->flags & (EDGE_ABNORMAL | EDGE_FAKE | EDGE_EH)))
       {
         has_return_edges = true;
@@ -2286,7 +2290,7 @@ tree_estimate_probability_bb (basic_block bb)
   FOR_EACH_EDGE (e, ei, bb->succs)
     {
       /* Predict edges to user labels with attributes.  */
-      if (e->dest != EXIT_BLOCK_PTR)
+      if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  gimple_stmt_iterator gi;
 	  for (gi = gsi_start_bb (e->dest); !gsi_end_p (gi); gsi_next (&gi))
@@ -2324,9 +2328,9 @@ tree_estimate_probability_bb (basic_block bb)
 	 return_block:
 	 return_stmt.  */
       if (e->dest != bb->next_bb
-	  && e->dest != EXIT_BLOCK_PTR
+	  && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && single_succ_p (e->dest)
-	  && single_succ_edge (e->dest)->dest == EXIT_BLOCK_PTR
+	  && single_succ_edge (e->dest)->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && (last = last_stmt (e->dest)) != NULL
 	  && gimple_code (last) == GIMPLE_RETURN)
 	{
@@ -2350,7 +2354,7 @@ tree_estimate_probability_bb (basic_block bb)
 
       /* Look for block we are guarding (ie we dominate it,
 	 but it doesn't postdominate us).  */
-      if (e->dest != EXIT_BLOCK_PTR && e->dest != bb
+      if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun) && e->dest != bb
 	  && dominated_by_p (CDI_DOMINATORS, e->dest, e->src)
 	  && !dominated_by_p (CDI_POST_DOMINATORS, e->src, e->dest))
 	{
@@ -2612,7 +2616,7 @@ propagate_freq (basic_block head, bitmap tovisit)
 	}
       BLOCK_INFO (bb)->npredecessors = count;
       /* When function never returns, we will never process exit block.  */
-      if (!count && bb == EXIT_BLOCK_PTR)
+      if (!count && bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bb->count = bb->frequency = 0;
     }
 
@@ -2762,7 +2766,7 @@ estimate_loops (void)
     {
       bitmap_set_bit (tovisit, bb->index);
     }
-  propagate_freq (ENTRY_BLOCK_PTR, tovisit);
+  propagate_freq (ENTRY_BLOCK_PTR_FOR_FN (cfun), tovisit);
   BITMAP_FREE (tovisit);
 }
 
@@ -2892,14 +2896,14 @@ counts_to_freqs (void)
   /* Don't overwrite the estimated frequencies when the profile for
      the function is missing.  We may drop this function PROFILE_GUESSED
      later in drop_profile ().  */
-  if (!ENTRY_BLOCK_PTR->count)
+  if (!ENTRY_BLOCK_PTR_FOR_FN (cfun)->count)
     return 0;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     true_count_max = MAX (bb->count, true_count_max);
 
   count_max = MAX (true_count_max, 1);
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     bb->frequency = (bb->count * BB_FREQ_MAX + count_max / 2) / count_max;
 
   return true_count_max;
@@ -2924,11 +2928,11 @@ expensive_function_p (int threshold)
   /* Frequencies are out of range.  This either means that function contains
      internal loop executing more than BB_FREQ_MAX times or profile feedback
      is available and function has not been executed at all.  */
-  if (ENTRY_BLOCK_PTR->frequency == 0)
+  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency == 0)
     return true;
 
   /* Maximally BB_FREQ_MAX^2 so overflow won't happen.  */
-  limit = ENTRY_BLOCK_PTR->frequency * threshold;
+  limit = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency * threshold;
   FOR_EACH_BB (bb)
     {
       rtx insn;
@@ -2973,12 +2977,13 @@ estimate_bb_frequencies (bool force)
 
       mark_dfs_back_edges ();
 
-      single_succ_edge (ENTRY_BLOCK_PTR)->probability = REG_BR_PROB_BASE;
+      single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun))->probability =
+	 REG_BR_PROB_BASE;
 
       /* Set up block info for each basic block.  */
       alloc_aux_for_blocks (sizeof (struct block_info_def));
       alloc_aux_for_edges (sizeof (struct edge_info_def));
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
 	{
 	  edge e;
 	  edge_iterator ei;
@@ -3002,7 +3007,7 @@ estimate_bb_frequencies (bool force)
 	  memcpy (&freq_max, &BLOCK_INFO (bb)->frequency, sizeof (freq_max));
 
       sreal_div (&freq_max, &real_bb_freq_max, &freq_max);
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
 	{
 	  sreal tmp;
 
@@ -3186,7 +3191,7 @@ rebuild_frequencies (void)
      max counts.  */
   gcov_type count_max = 0;
   basic_block bb;
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     count_max = MAX (bb->count, count_max);
 
   if (profile_status == PROFILE_GUESSED
diff --git a/gcc/profile.c b/gcc/profile.c
index 1f1c265..85671b3 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -117,7 +117,7 @@ instrument_edges (struct edge_list *el)
   int num_edges = NUM_EDGES (el);
   basic_block bb;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -192,7 +192,8 @@ instrument_values (histogram_values values)
 
   case HIST_TYPE_TIME_PROFILE:
     {
-      basic_block bb = split_edge (single_succ_edge (ENTRY_BLOCK_PTR));
+      basic_block bb =
+     split_edge (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
       gimple_stmt_iterator gsi = gsi_start_bb (bb);
 
       gimple_gen_time_profiler (t, 0, gsi);
@@ -272,7 +273,7 @@ get_exec_counts (unsigned cfg_checksum, unsigned lineno_checksum)
   gcov_type *counts;
 
   /* Count the edges to be (possibly) instrumented.  */
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -332,7 +333,7 @@ correct_negative_edge_counts (void)
   edge e;
   edge_iterator ei;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
         {
@@ -383,7 +384,8 @@ is_inconsistent (void)
 	  inconsistent = true;
 	}
       if (bb->count != sum_edge_counts (bb->succs) &&
-          ! (find_edge (bb, EXIT_BLOCK_PTR) != NULL && block_ends_with_call_p (bb)))
+	  ! (find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun)) != NULL
+	     && block_ends_with_call_p (bb)))
 	{
 	  if (dump_file)
 	    {
@@ -408,7 +410,7 @@ static void
 set_bb_counts (void)
 {
   basic_block bb;
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       bb->count = sum_edge_counts (bb->succs);
       gcc_assert (bb->count >= 0);
@@ -427,7 +429,7 @@ read_profile_edge_counts (gcov_type *exec_counts)
   /* The first count in the .da file is the number of times that the function
      was entered.  This is the exec_count for block zero.  */
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -491,7 +493,7 @@ compute_frequency_overlap (void)
   int overlap = 0;
   basic_block bb;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       count_total += bb->count;
       freq_total += bb->frequency;
@@ -500,7 +502,7 @@ compute_frequency_overlap (void)
   if (count_total == 0 || freq_total == 0)
     return 0;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     overlap += MIN (bb->count * OVERLAP_BASE / count_total,
 		    bb->frequency * OVERLAP_BASE / freq_total);
 
@@ -537,7 +539,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
 
   /* Attach extra info block to each bb.  */
   alloc_aux_for_blocks (sizeof (struct bb_info));
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -551,8 +553,8 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
     }
 
   /* Avoid predicting entry on exit nodes.  */
-  BB_INFO (EXIT_BLOCK_PTR)->succ_count = 2;
-  BB_INFO (ENTRY_BLOCK_PTR)->pred_count = 2;
+  BB_INFO (EXIT_BLOCK_PTR_FOR_FN (cfun))->succ_count = 2;
+  BB_INFO (ENTRY_BLOCK_PTR_FOR_FN (cfun))->pred_count = 2;
 
   num_edges = read_profile_edge_counts (exec_counts);
 
@@ -582,7 +584,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
     {
       passes++;
       changes = 0;
-      FOR_BB_BETWEEN (bb, EXIT_BLOCK_PTR, NULL, prev_bb)
+      FOR_BB_BETWEEN (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), NULL, prev_bb)
 	{
 	  struct bb_info *bi = BB_INFO (bb);
 	  if (! bi->count_valid)
@@ -724,7 +726,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
     hist_br_prob[i] = 0;
   num_branches = 0;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     {
       edge e;
       edge_iterator ei;
@@ -743,9 +745,9 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
 	     already present.  We get negative frequency from the entry
 	     point.  */
 	  if ((e->count < 0
-	       && e->dest == EXIT_BLOCK_PTR)
+	       && e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	      || (e->count > bb->count
-		  && e->dest != EXIT_BLOCK_PTR))
+		  && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)))
 	    {
 	      if (block_ends_with_call_p (bb))
 		e->count = e->count < 0 ? 0 : bb->count;
@@ -1064,17 +1066,17 @@ branch_prob (void)
 	      ne->goto_locus = e->goto_locus;
 	    }
 	  if ((e->flags & (EDGE_ABNORMAL | EDGE_ABNORMAL_CALL))
-	       && e->dest != EXIT_BLOCK_PTR)
+	       && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    need_exit_edge = 1;
-	  if (e->dest == EXIT_BLOCK_PTR)
+	  if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    have_exit_edge = 1;
 	}
       FOR_EACH_EDGE (e, ei, bb->preds)
 	{
 	  if ((e->flags & (EDGE_ABNORMAL | EDGE_ABNORMAL_CALL))
-	       && e->src != ENTRY_BLOCK_PTR)
+	       && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    need_entry_edge = 1;
-	  if (e->src == ENTRY_BLOCK_PTR)
+	  if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    have_entry_edge = 1;
 	}
 
@@ -1083,14 +1085,14 @@ branch_prob (void)
 	  if (dump_file)
 	    fprintf (dump_file, "Adding fake exit edge to bb %i\n",
 		     bb->index);
-	  make_edge (bb, EXIT_BLOCK_PTR, EDGE_FAKE);
+	  make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
 	}
       if (need_entry_edge && !have_entry_edge)
 	{
 	  if (dump_file)
 	    fprintf (dump_file, "Adding fake entry edge to bb %i\n",
 		     bb->index);
-	  make_edge (ENTRY_BLOCK_PTR, bb, EDGE_FAKE);
+	  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), bb, EDGE_FAKE);
 	  /* Avoid bbs that have both fake entry edge and also some
 	     exit edge.  One of those edges wouldn't be added to the
 	     spanning tree, but we can't instrument any of them.  */
@@ -1146,7 +1148,8 @@ branch_prob (void)
 
       /* Mark edges we've replaced by fake edges above as ignored.  */
       if ((e->flags & (EDGE_ABNORMAL | EDGE_ABNORMAL_CALL))
-	  && e->src != ENTRY_BLOCK_PTR && e->dest != EXIT_BLOCK_PTR)
+	  && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	  && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  EDGE_INFO (e)->ignore = 1;
 	  ignored_edges++;
@@ -1213,7 +1216,8 @@ branch_prob (void)
       gcov_write_length (offset);
 
       /* Arcs */
-      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+      FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		      EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
 	{
 	  edge e;
 	  edge_iterator ei;
@@ -1257,7 +1261,7 @@ branch_prob (void)
 	  gimple_stmt_iterator gsi;
 	  gcov_position_t offset = 0;
 
-	  if (bb == ENTRY_BLOCK_PTR->next_bb)
+	  if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb)
 	    {
 	      expanded_location curr_location =
 		expand_location (DECL_SOURCE_LOCATION (current_function_decl));
@@ -1381,11 +1385,11 @@ find_spanning_tree (struct edge_list *el)
   basic_block bb;
 
   /* We use aux field for standard union-find algorithm.  */
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     bb->aux = bb;
 
   /* Add fake edge exit to entry we can't instrument.  */
-  union_groups (EXIT_BLOCK_PTR, ENTRY_BLOCK_PTR);
+  union_groups (EXIT_BLOCK_PTR_FOR_FN (cfun), ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   /* First add all abnormal edges to the tree unless they form a cycle. Also
      add all edges to EXIT_BLOCK_PTR to avoid inserting profiling code behind
@@ -1394,7 +1398,7 @@ find_spanning_tree (struct edge_list *el)
     {
       edge e = INDEX_EDGE (el, i);
       if (((e->flags & (EDGE_ABNORMAL | EDGE_ABNORMAL_CALL | EDGE_FAKE))
-	   || e->dest == EXIT_BLOCK_PTR)
+	   || e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  && !EDGE_INFO (e)->ignore
 	  && (find_group (e->src) != find_group (e->dest)))
 	{
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 756d3bd..6aad466 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -2649,7 +2649,7 @@ convert_regs_entry (void)
      Note that we are inserting converted code here.  This code is
      never seen by the convert_regs pass.  */
 
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     {
       basic_block block = e->dest;
       block_info bi = BLOCK_INFO (block);
@@ -2693,7 +2693,7 @@ convert_regs_exit (void)
       value_reg_high = END_HARD_REGNO (retvalue) - 1;
     }
 
-  output_stack = &BLOCK_INFO (EXIT_BLOCK_PTR)->stack_in;
+  output_stack = &BLOCK_INFO (EXIT_BLOCK_PTR_FOR_FN (cfun))->stack_in;
   if (value_reg_low == -1)
     output_stack->top = -1;
   else
@@ -2847,7 +2847,7 @@ compensate_edges (void)
   starting_stack_p = false;
 
   FOR_EACH_BB (bb)
-    if (bb != ENTRY_BLOCK_PTR)
+    if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
       {
         edge e;
         edge_iterator ei;
@@ -3141,14 +3141,14 @@ convert_regs (void)
 
   /* Construct the desired stack for function exit.  */
   convert_regs_exit ();
-  BLOCK_INFO (EXIT_BLOCK_PTR)->done = 1;
+  BLOCK_INFO (EXIT_BLOCK_PTR_FOR_FN (cfun))->done = 1;
 
   /* ??? Future: process inner loops first, and give them arbitrary
      initial stacks which emit_swap_insn can modify.  This ought to
      prevent double fxch that often appears at the head of a loop.  */
 
   /* Process all blocks reachable from all entry points.  */
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     cfg_altered |= convert_regs_2 (e->dest);
 
   /* ??? Process all unreachable blocks.  Though there's no excuse
@@ -3221,7 +3221,7 @@ reg_to_stack (void)
 
       FOR_EACH_EDGE (e, ei, bb->preds)
 	if (!(e->flags & EDGE_DFS_BACK)
-	    && e->src != ENTRY_BLOCK_PTR)
+	    && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	  bi->predecessors++;
 
       /* Set current register status at last instruction `uninitialized'.  */
diff --git a/gcc/regs.h b/gcc/regs.h
index b5fa3f39..9bf426c 100644
--- a/gcc/regs.h
+++ b/gcc/regs.h
@@ -137,7 +137,7 @@ extern size_t reg_info_p_size;
    frequency.  */
 #define REG_FREQ_FROM_BB(bb) (optimize_size				      \
 			      || (flag_branch_probabilities		      \
-				  && !ENTRY_BLOCK_PTR->count)		      \
+				  && !ENTRY_BLOCK_PTR_FOR_FN (cfun)->count)   \
 			      ? REG_FREQ_MAX				      \
 			      : ((bb)->frequency * REG_FREQ_MAX / BB_FREQ_MAX)\
 			      ? ((bb)->frequency * REG_FREQ_MAX / BB_FREQ_MAX)\
diff --git a/gcc/reload.c b/gcc/reload.c
index b69660d..96619f6 100644
--- a/gcc/reload.c
+++ b/gcc/reload.c
@@ -1615,7 +1615,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
 	    && reg_mentioned_p (XEXP (note, 0), in)
 	    /* Check that a former pseudo is valid; see find_dummy_reload.  */
 	    && (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
-		|| (! bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR),
+		|| (! bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 				    ORIGINAL_REGNO (XEXP (note, 0)))
 		    && hard_regno_nregs[regno][GET_MODE (XEXP (note, 0))] == 1))
 	    && ! refers_to_regno_for_reload_p (regno,
@@ -1939,7 +1939,7 @@ combine_reloads (void)
 	&& !fixed_regs[regno]
 	/* Check that a former pseudo is valid; see find_dummy_reload.  */
 	&& (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
-	    || (!bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR),
+	    || (!bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			       ORIGINAL_REGNO (XEXP (note, 0)))
 		&& hard_regno_nregs[regno][GET_MODE (XEXP (note, 0))] == 1)))
       {
@@ -2098,7 +2098,7 @@ find_dummy_reload (rtx real_in, rtx real_out, rtx *inloc, rtx *outloc,
 	     can ignore the conflict).  We must never introduce writes
 	     to such hardregs, as they would clobber the other live
 	     pseudo.  See PR 20973.  */
-          || (!bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR),
+	  || (!bitmap_bit_p (DF_LR_OUT (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			     ORIGINAL_REGNO (in))
 	      /* Similarly, only do this if we can be sure that the death
 		 note is still valid.  global can assign some hardreg to
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 66b5ff1..6864ec1 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -617,8 +617,8 @@ has_nonexceptional_receiver (void)
     bb->flags &= ~BB_REACHABLE;
 
   /* Place the exit block on our worklist.  */
-  EXIT_BLOCK_PTR->flags |= BB_REACHABLE;
-  *tos++ = EXIT_BLOCK_PTR;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->flags |= BB_REACHABLE;
+  *tos++ = EXIT_BLOCK_PTR_FOR_FN (cfun);
 
   /* Iterate: find everything reachable from what we've already seen.  */
   while (tos != worklist)
diff --git a/gcc/resource.c b/gcc/resource.c
index 3671812..4609c3a 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -147,7 +147,7 @@ find_basic_block (rtx insn, int search_limit)
 
   /* The start of the function.  */
   else if (insn == 0)
-    return ENTRY_BLOCK_PTR->next_bb->index;
+    return ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->index;
 
   /* See if any of the upcoming CODE_LABELs start a basic block.  If we reach
      anything other than a CODE_LABEL or note, we can't find this code.  */
@@ -966,7 +966,7 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res)
 
       /* Get starting and ending insn, handling the case where each might
 	 be a SEQUENCE.  */
-      start_insn = (b == ENTRY_BLOCK_PTR->next_bb->index ?
+      start_insn = (b == ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->index ?
 		    insns : BB_HEAD (BASIC_BLOCK (b)));
       stop_insn = target;
 
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index 8d23e33..955501a 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -648,7 +648,7 @@ schedule_ebbs (void)
 	{
 	  edge e;
 	  tail = BB_END (bb);
-	  if (bb->next_bb == EXIT_BLOCK_PTR
+	  if (bb->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      || LABEL_P (BB_HEAD (bb->next_bb)))
 	    break;
 	  e = find_fallthru_edge (bb->succs);
@@ -683,7 +683,7 @@ ebb_add_block (basic_block bb, basic_block after)
   /* Recovery blocks are always bounded by BARRIERS,
      therefore, they always form single block EBB,
      therefore, we can use rec->index to identify such EBBs.  */
-  if (after == EXIT_BLOCK_PTR)
+  if (after == EXIT_BLOCK_PTR_FOR_FN (cfun))
     bitmap_set_bit (&dont_calc_deps, bb->index);
   else if (after == last_bb)
     last_bb = bb;
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index 33112ee..070404c 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -945,14 +945,15 @@ extern vec<haifa_deps_insn_data_def> h_d_i_d;
 /* INSN is a speculation check that will simply reexecute the speculatively
    scheduled instruction if the speculation fails.  */
 #define IS_SPECULATION_SIMPLE_CHECK_P(INSN) \
-  (RECOVERY_BLOCK (INSN) == EXIT_BLOCK_PTR)
+  (RECOVERY_BLOCK (INSN) == EXIT_BLOCK_PTR_FOR_FN (cfun))
 
 /* INSN is a speculation check that will branch to RECOVERY_BLOCK if the
    speculation fails.  Insns in that block will reexecute the speculatively
    scheduled code and then will return immediately after INSN thus preserving
    semantics of the program.  */
 #define IS_SPECULATION_BRANCHY_CHECK_P(INSN) \
-  (RECOVERY_BLOCK (INSN) != NULL && RECOVERY_BLOCK (INSN) != EXIT_BLOCK_PTR)
+  (RECOVERY_BLOCK (INSN) != NULL             \
+   && RECOVERY_BLOCK (INSN) != EXIT_BLOCK_PTR_FOR_FN (cfun))
 
 \f
 /* Dep status (aka ds_t) of the link encapsulates all information for a given
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 87042dd..1663e2f 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -495,7 +495,7 @@ find_single_block_region (bool ebbs_p)
             BLOCK_TO_BB (bb->index) = i - RGN_BLOCKS (nr_regions);
             i++;
 
-            if (bb->next_bb == EXIT_BLOCK_PTR
+	    if (bb->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
                 || LABEL_P (BB_HEAD (bb->next_bb)))
               break;
 
@@ -665,7 +665,7 @@ haifa_find_rgns (void)
 
   /* DFS traversal to find inner loops in the cfg.  */
 
-  current_edge = ei_start (single_succ (ENTRY_BLOCK_PTR)->succs);
+  current_edge = ei_start (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun))->succs);
   sp = -1;
 
   while (1)
@@ -840,7 +840,7 @@ haifa_find_rgns (void)
 	      /* If we exited the loop early, then I is the header of
 		 a non-reducible loop and we should quit processing it
 		 now.  */
-	      if (jbb != EXIT_BLOCK_PTR)
+	      if (jbb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		continue;
 
 	      /* I is a header of an inner loop, or block 0 in a subroutine
@@ -858,7 +858,7 @@ haifa_find_rgns (void)
 	      /* Decrease degree of all I's successors for topological
 		 ordering.  */
 	      FOR_EACH_EDGE (e, ei, bb->succs)
-		if (e->dest != EXIT_BLOCK_PTR)
+		if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		  --degree[e->dest->index];
 
 	      /* Estimate # insns, and count # blocks in the region.  */
@@ -875,7 +875,7 @@ haifa_find_rgns (void)
 		    /* Leaf nodes have only a single successor which must
 		       be EXIT_BLOCK.  */
 		    if (single_succ_p (jbb)
-			&& single_succ (jbb) == EXIT_BLOCK_PTR)
+			&& single_succ (jbb) == EXIT_BLOCK_PTR_FOR_FN (cfun))
 		      {
 			queue[++tail] = jbb->index;
 			bitmap_set_bit (in_queue, jbb->index);
@@ -893,7 +893,7 @@ haifa_find_rgns (void)
 
 		  FOR_EACH_EDGE (e, ei, bb->preds)
 		    {
-		      if (e->src == ENTRY_BLOCK_PTR)
+		      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 			continue;
 
 		      node = e->src->index;
@@ -954,7 +954,7 @@ haifa_find_rgns (void)
 
 		      /* See discussion above about nodes not marked as in
 			 this loop during the initial DFS traversal.  */
-		      if (e->src == ENTRY_BLOCK_PTR
+		      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun)
 			  || max_hdr[node] != loop_head)
 			{
 			  tail = -1;
@@ -1006,7 +1006,7 @@ haifa_find_rgns (void)
 			  queue[head] = queue[tail--];
 
 			  FOR_EACH_EDGE (e, ei, BASIC_BLOCK (child)->succs)
-			    if (e->dest != EXIT_BLOCK_PTR)
+			    if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 			      --degree[e->dest->index];
 			}
 		      else
@@ -1026,7 +1026,7 @@ haifa_find_rgns (void)
 		     This may provide several smaller regions instead
 		     of one too_large region.  */
                   FOR_EACH_EDGE (e, ei, bb->succs)
-                    if (e->dest != EXIT_BLOCK_PTR)
+		    if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
                       bitmap_set_bit (extended_rgn_header, e->dest->index);
                 }
 	    }
@@ -1305,7 +1305,7 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 	      BLOCK_TO_BB (bbn) = 0;
 
 	      FOR_EACH_EDGE (e, ei, BASIC_BLOCK (bbn)->succs)
-		if (e->dest != EXIT_BLOCK_PTR)
+		if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		  degree[e->dest->index]--;
 
 	      if (!large)
@@ -1362,7 +1362,7 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 		      idx++;
 
 		      FOR_EACH_EDGE (e, ei, BASIC_BLOCK (succn)->succs)
-			if (e->dest != EXIT_BLOCK_PTR)
+			if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 			  degree[e->dest->index]--;
 		    }
 		}
@@ -1426,7 +1426,7 @@ compute_dom_prob_ps (int bb)
       edge out_edge;
       edge_iterator out_ei;
 
-      if (in_edge->src == ENTRY_BLOCK_PTR)
+      if (in_edge->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
 
       pred_bb = BLOCK_TO_BB (in_edge->src->index);
@@ -2663,7 +2663,7 @@ propagate_deps (int bb, struct deps_desc *pred_deps)
   FOR_EACH_EDGE (e, ei, block->succs)
     {
       /* Only bbs "below" bb, in the same region, are interesting.  */
-      if (e->dest == EXIT_BLOCK_PTR
+      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  || CONTAINING_RGN (block->index) != CONTAINING_RGN (e->dest->index)
 	  || BLOCK_TO_BB (e->dest->index) <= bb)
 	continue;
@@ -3454,10 +3454,11 @@ rgn_add_block (basic_block bb, basic_block after)
   extend_regions ();
   bitmap_set_bit (&not_in_df, bb->index);
 
-  if (after == 0 || after == EXIT_BLOCK_PTR)
+  if (after == 0 || after == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       rgn_make_new_region_out_of_new_block (bb);
-      RGN_DONT_CALC_DEPS (nr_regions - 1) = (after == EXIT_BLOCK_PTR);
+      RGN_DONT_CALC_DEPS (nr_regions - 1) = (after
+					     == EXIT_BLOCK_PTR_FOR_FN (cfun));
     }
   else
     {
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 579cf8d..7dfc703 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3682,7 +3682,7 @@ maybe_tidy_empty_bb (basic_block bb)
      successors.  Otherwise remove it.  */
   if (!sel_bb_empty_p (bb)
       || (single_succ_p (bb)
-          && single_succ (bb) == EXIT_BLOCK_PTR
+	  && single_succ (bb) == EXIT_BLOCK_PTR_FOR_FN (cfun)
           && (!single_pred_p (bb)
               || !(single_pred_edge (bb)->flags & EDGE_FALLTHRU)))
       || EDGE_COUNT (bb->preds) == 0
@@ -3853,7 +3853,7 @@ tidy_control_flow (basic_block xbb, bool full_tidying)
       && EDGE_COUNT (xbb->succs) == 1
       && (EDGE_SUCC (xbb, 0)->flags & EDGE_FALLTHRU)
       /* When successor is an EXIT block, it may not be the next block.  */
-      && single_succ (xbb) != EXIT_BLOCK_PTR
+      && single_succ (xbb) != EXIT_BLOCK_PTR_FOR_FN (cfun)
       /* And unconditional jump in previous basic block leads to
          next basic block of XBB and this jump can be safely removed.  */
       && in_current_region_p (xbb->prev_bb)
@@ -4325,7 +4325,7 @@ init_lv_sets (void)
     init_lv_set (bb);
 
   /* Don't forget EXIT_BLOCK.  */
-  init_lv_set (EXIT_BLOCK_PTR);
+  init_lv_set (EXIT_BLOCK_PTR_FOR_FN (cfun));
 }
 
 /* Release lv set of HEAD.  */
@@ -4346,7 +4346,7 @@ free_lv_sets (void)
   basic_block bb;
 
   /* Don't forget EXIT_BLOCK.  */
-  free_lv_set (EXIT_BLOCK_PTR);
+  free_lv_set (EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   /* Free LV sets.  */
   FOR_EACH_BB (bb)
@@ -4524,7 +4524,7 @@ sel_bb_head (basic_block bb)
 {
   insn_t head;
 
-  if (bb == EXIT_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       gcc_assert (exit_insn != NULL_RTX);
       head = exit_insn;
@@ -4557,7 +4557,7 @@ sel_bb_end (basic_block bb)
   if (sel_bb_empty_p (bb))
     return NULL_RTX;
 
-  gcc_assert (bb != EXIT_BLOCK_PTR);
+  gcc_assert (bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   return BB_END (bb);
 }
@@ -4852,7 +4852,7 @@ bb_ends_ebb_p (basic_block bb)
   basic_block next_bb = bb_next_bb (bb);
   edge e;
 
-  if (next_bb == EXIT_BLOCK_PTR
+  if (next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
       || bitmap_bit_p (forced_ebb_heads, next_bb->index)
       || (LABEL_P (BB_HEAD (next_bb))
 	  /* NB: LABEL_NUSES () is not maintained outside of jump.c.
@@ -5538,7 +5538,7 @@ sel_create_recovery_block (insn_t orig_insn)
 
   recovery_block = sched_create_recovery_block (&before_recovery);
   if (before_recovery)
-    copy_lv_set_from (before_recovery, EXIT_BLOCK_PTR);
+    copy_lv_set_from (before_recovery, EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   gcc_assert (sel_bb_empty_p (recovery_block));
   sched_create_recovery_edges (first_bb, recovery_block, second_bb);
@@ -5821,7 +5821,7 @@ setup_nop_and_exit_insns (void)
   emit_insn (nop_pattern);
   exit_insn = get_insns ();
   end_sequence ();
-  set_block_for_insn (exit_insn, EXIT_BLOCK_PTR);
+  set_block_for_insn (exit_insn, EXIT_BLOCK_PTR_FOR_FN (cfun));
 }
 
 /* Free special insns used in the scheduler.  */
@@ -6396,7 +6396,7 @@ sel_remove_loop_preheader (void)
                  If it is so - delete this jump and clear data sets of its
                  basic block if it becomes empty.  */
 	      if (next_bb->prev_bb == prev_bb
-                  && prev_bb != ENTRY_BLOCK_PTR
+		  && prev_bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
                   && bb_has_removable_jump_to_p (prev_bb, next_bb))
                 {
                   redirect_edge_and_branch (EDGE_SUCC (prev_bb, 0), next_bb);
diff --git a/gcc/sel-sched-ir.h b/gcc/sel-sched-ir.h
index 486159d..ff99e51 100644
--- a/gcc/sel-sched-ir.h
+++ b/gcc/sel-sched-ir.h
@@ -1024,7 +1024,7 @@ inner_loop_header_p (basic_block bb)
   if (!current_loop_nest)
     return false;
 
-  if (bb == EXIT_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
   inner_loop = bb->loop_father;
@@ -1050,7 +1050,7 @@ get_loop_exit_edges_unique_dests (const struct loop *loop)
   vec<edge> edges = vNULL;
   struct loop_exit *exit;
 
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun)
               && current_loops->state & LOOPS_HAVE_RECORDED_EXITS);
 
   for (exit = loop->exits->next; exit->e; exit = exit->next)
@@ -1083,7 +1083,7 @@ sel_bb_empty_or_nop_p (basic_block bb)
   if (!INSN_NOP_P (first))
     return false;
 
-  if (bb == EXIT_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
   last = sel_bb_end (bb);
@@ -1204,7 +1204,7 @@ _succ_iter_start (insn_t *succp, insn_t insn, int flags)
   i.current_exit = -1;
   i.loop_exits.create (0);
 
-  if (bb != EXIT_BLOCK_PTR && BB_END (bb) != insn)
+  if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun) && BB_END (bb) != insn)
     {
       i.bb_end = false;
 
@@ -1308,7 +1308,7 @@ _succ_iter_cond (succ_iterator *ip, rtx *succp, rtx insn,
 	{
 	  basic_block bb = ip->e2->dest;
 
-	  if (bb == EXIT_BLOCK_PTR || bb == after_recovery)
+	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb == after_recovery)
 	    *succp = exit_insn;
 	  else
 	    {
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index c2d4185..1e3fcf0 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -4551,7 +4551,8 @@ find_block_for_bookkeeping (edge e1, edge e2, bool lax)
   edge e;
 
   /* Loop over edges from E1 to E2, inclusive.  */
-  for (e = e1; !lax || e->dest != EXIT_BLOCK_PTR; e = EDGE_SUCC (e->dest, 0))
+  for (e = e1; !lax || e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun);
+       e = EDGE_SUCC (e->dest, 0))
     {
       if (EDGE_COUNT (e->dest->preds) == 2)
 	{
@@ -4642,7 +4643,7 @@ create_block_for_bookkeeping (edge e1, edge e2)
       if (DEBUG_INSN_P (insn)
 	  && single_succ_p (new_bb)
 	  && (succ = single_succ (new_bb))
-	  && succ != EXIT_BLOCK_PTR
+	  && succ != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && DEBUG_INSN_P ((last = sel_bb_end (new_bb))))
 	{
 	  while (insn != last && (DEBUG_INSN_P (insn) || NOTE_P (insn)))
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index ffbeed2..378d6c7 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -805,7 +805,7 @@ insert_store (struct st_expr * expr, edge e)
 
   /* If tmp is NULL, we found an insertion on every edge, blank the
      insertion vector for these edges, and insert at the start of the BB.  */
-  if (!tmp && bb != EXIT_BLOCK_PTR)
+  if (!tmp && bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       FOR_EACH_EDGE (tmp, ei, e->dest->preds)
 	{
@@ -869,7 +869,7 @@ remove_reachable_equiv_notes (basic_block bb, struct st_expr *smexpr)
 	}
       bb = act->dest;
 
-      if (bb == EXIT_BLOCK_PTR
+      if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  || bitmap_bit_p (visited, bb->index))
 	{
 	  if (!ei_end_p (ei))
diff --git a/gcc/trans-mem.c b/gcc/trans-mem.c
index 2486005..271f600 100644
--- a/gcc/trans-mem.c
+++ b/gcc/trans-mem.c
@@ -1950,7 +1950,7 @@ tm_region_init (struct tm_region *region)
   vec<tm_region_p> bb_regions = vNULL;
 
   all_tm_regions = region;
-  bb = single_succ (ENTRY_BLOCK_PTR);
+  bb = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   /* We could store this information in bb->aux, but we may get called
      through get_all_tm_blocks() from another pass that may be already
@@ -2016,7 +2016,7 @@ gate_tm_init (void)
       struct tm_region *region = (struct tm_region *)
 	obstack_alloc (&tm_obstack.obstack, sizeof (struct tm_region));
       memset (region, 0, sizeof (*region));
-      region->entry_block = single_succ (ENTRY_BLOCK_PTR);
+      region->entry_block = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
       /* For a clone, the entire function is the region.  But even if
 	 we don't need to record any exit blocks, we may need to
 	 record irrevocable blocks.  */
@@ -3633,7 +3633,8 @@ tm_memopt_compute_available (struct tm_region *region,
 	/* If the out state of this block changed, then we need to add
 	   its successors to the worklist if they are not already in.  */
 	FOR_EACH_EDGE (e, ei, bb->succs)
-	  if (!AVAIL_IN_WORKLIST_P (e->dest) && e->dest != EXIT_BLOCK_PTR)
+	  if (!AVAIL_IN_WORKLIST_P (e->dest)
+	      && e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      *qin++ = e->dest;
 	      AVAIL_IN_WORKLIST_P (e->dest) = true;
@@ -4539,12 +4540,14 @@ ipa_tm_scan_irr_function (struct cgraph_node *node, bool for_clone)
   if (for_clone)
     {
       old_irr = d->irrevocable_blocks_clone;
-      queue.quick_push (single_succ (ENTRY_BLOCK_PTR));
+      queue.quick_push (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
       if (ipa_tm_scan_irr_blocks (&queue, new_irr, old_irr, NULL))
 	{
-	  ipa_tm_propagate_irr (single_succ (ENTRY_BLOCK_PTR), new_irr,
+	  ipa_tm_propagate_irr (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+				new_irr,
 				old_irr, NULL);
-	  ret = bitmap_bit_p (new_irr, single_succ (ENTRY_BLOCK_PTR)->index);
+	  ret = bitmap_bit_p (new_irr,
+			      single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun))->index);
 	}
     }
   else
@@ -5294,7 +5297,8 @@ ipa_tm_transform_clone (struct cgraph_node *node)
   calculate_dominance_info (CDI_DOMINATORS);
 
   need_ssa_rename =
-    ipa_tm_transform_calls (d->clone, NULL, single_succ (ENTRY_BLOCK_PTR),
+    ipa_tm_transform_calls (d->clone, NULL,
+			    single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			    d->irrevocable_blocks_clone);
 
   if (need_ssa_rename)
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index d2af39e..b9fb719 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -190,14 +190,14 @@ init_empty_tree_cfg_for_function (struct function *fn)
 			 initial_cfg_capacity);
 
   SET_BASIC_BLOCK_FOR_FUNCTION (fn, ENTRY_BLOCK,
-				ENTRY_BLOCK_PTR_FOR_FUNCTION (fn));
+				ENTRY_BLOCK_PTR_FOR_FN (fn));
   SET_BASIC_BLOCK_FOR_FUNCTION (fn, EXIT_BLOCK,
-		   EXIT_BLOCK_PTR_FOR_FUNCTION (fn));
+		   EXIT_BLOCK_PTR_FOR_FN (fn));
 
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)->next_bb
-    = EXIT_BLOCK_PTR_FOR_FUNCTION (fn);
-  EXIT_BLOCK_PTR_FOR_FUNCTION (fn)->prev_bb
-    = ENTRY_BLOCK_PTR_FOR_FUNCTION (fn);
+  ENTRY_BLOCK_PTR_FOR_FN (fn)->next_bb
+    = EXIT_BLOCK_PTR_FOR_FN (fn);
+  EXIT_BLOCK_PTR_FOR_FN (fn)->prev_bb
+    = ENTRY_BLOCK_PTR_FOR_FN (fn);
 }
 
 void
@@ -236,7 +236,7 @@ build_gimple_cfg (gimple_seq seq)
 
   /* Make sure there is always at least one block, even if it's empty.  */
   if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
-    create_empty_bb (ENTRY_BLOCK_PTR);
+    create_empty_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   /* Adjust the size of the array.  */
   if (basic_block_info->length () < (size_t) n_basic_blocks_for_fn (cfun))
@@ -518,7 +518,7 @@ make_blocks (gimple_seq seq)
   gimple stmt = NULL;
   bool start_new_block = true;
   bool first_stmt_of_seq = true;
-  basic_block bb = ENTRY_BLOCK_PTR;
+  basic_block bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   while (!gsi_end_p (i))
     {
@@ -669,7 +669,8 @@ make_edges (void)
 
   /* Create an edge from entry to the first block with executable
      statements in it.  */
-  make_edge (ENTRY_BLOCK_PTR, BASIC_BLOCK (NUM_FIXED_BLOCKS), EDGE_FALLTHRU);
+  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), BASIC_BLOCK (NUM_FIXED_BLOCKS),
+	     EDGE_FALLTHRU);
 
   /* Traverse the basic block array placing edges.  */
   FOR_EACH_BB (bb)
@@ -687,7 +688,7 @@ make_edges (void)
 	      fallthru = false;
 	      break;
 	    case GIMPLE_RETURN:
-	      make_edge (bb, EXIT_BLOCK_PTR, 0);
+	      make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
 	      fallthru = false;
 	      break;
 	    case GIMPLE_COND:
@@ -719,7 +720,8 @@ make_edges (void)
 
 	      /* BUILTIN_RETURN is really a return statement.  */
 	      if (gimple_call_builtin_p (last, BUILT_IN_RETURN))
-		make_edge (bb, EXIT_BLOCK_PTR, 0), fallthru = false;
+		make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), 0),
+		  fallthru = false;
 	      /* Some calls are known not to return.  */
 	      else
 	        fallthru = !(gimple_call_flags (last) & ECF_NORETURN);
@@ -1503,7 +1505,7 @@ gimple_can_merge_blocks_p (basic_block a, basic_block b)
   if (!single_pred_p (b))
     return false;
 
-  if (b == EXIT_BLOCK_PTR)
+  if (b == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return false;
 
   /* If A ends by a statement causing exceptions or something similar, we
@@ -4849,19 +4851,21 @@ gimple_verify_flow_info (void)
   edge e;
   edge_iterator ei;
 
-  if (ENTRY_BLOCK_PTR->il.gimple.seq || ENTRY_BLOCK_PTR->il.gimple.phi_nodes)
+  if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->il.gimple.seq
+      || ENTRY_BLOCK_PTR_FOR_FN (cfun)->il.gimple.phi_nodes)
     {
       error ("ENTRY_BLOCK has IL associated with it");
       err = 1;
     }
 
-  if (EXIT_BLOCK_PTR->il.gimple.seq || EXIT_BLOCK_PTR->il.gimple.phi_nodes)
+  if (EXIT_BLOCK_PTR_FOR_FN (cfun)->il.gimple.seq
+      || EXIT_BLOCK_PTR_FOR_FN (cfun)->il.gimple.phi_nodes)
     {
       error ("EXIT_BLOCK has IL associated with it");
       err = 1;
     }
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     if (e->flags & EDGE_FALLTHRU)
       {
 	error ("fallthru to exit from bb %d", e->src->index);
@@ -5041,7 +5045,7 @@ gimple_verify_flow_info (void)
 	      error ("wrong outgoing edge flags at end of bb %d", bb->index);
 	      err = 1;
 	    }
-	  if (single_succ (bb) != EXIT_BLOCK_PTR)
+	  if (single_succ (bb) != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      error ("return edge does not point to exit in bb %d",
 		     bb->index);
@@ -5281,7 +5285,7 @@ gimple_redirect_edge_and_branch (edge e, basic_block dest)
   if (e->flags & EDGE_EH)
     return redirect_eh_edge (e, dest);
 
-  if (e->src != ENTRY_BLOCK_PTR)
+  if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
     {
       ret = gimple_try_redirect_by_replacing_jump (e, dest);
       if (ret)
@@ -5564,7 +5568,7 @@ gimple_duplicate_bb (basic_block bb)
   gimple_seq phis = phi_nodes (bb);
   gimple phi, stmt, copy;
 
-  new_bb = create_empty_bb (EXIT_BLOCK_PTR->prev_bb);
+  new_bb = create_empty_bb (EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb);
 
   /* Copy the PHI nodes.  We ignore PHI node arguments here because
      the incoming edges have not been setup yet.  */
@@ -6901,9 +6905,9 @@ move_sese_region_to_fn (struct function *dest_cfun, basic_block entry_bb,
      FIXME, this is silly.  The CFG ought to become a parameter to
      these helpers.  */
   push_cfun (dest_cfun);
-  make_edge (ENTRY_BLOCK_PTR, entry_bb, EDGE_FALLTHRU);
+  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), entry_bb, EDGE_FALLTHRU);
   if (exit_bb)
-    make_edge (exit_bb,  EXIT_BLOCK_PTR, 0);
+    make_edge (exit_bb,  EXIT_BLOCK_PTR_FOR_FN (cfun), 0);
   pop_cfun ();
 
   /* Back in the original function, the SESE region has disappeared,
@@ -7247,7 +7251,7 @@ print_loops (FILE *file, int verbosity)
 {
   basic_block bb;
 
-  bb = ENTRY_BLOCK_PTR;
+  bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   if (bb && bb->loop_father)
     print_loop_and_siblings (file, bb->loop_father, 0, verbosity);
 }
@@ -7416,7 +7420,8 @@ gimple_flow_call_edges_add (sbitmap blocks)
   if (! blocks)
     check_last_block = true;
   else
-    check_last_block = bitmap_bit_p (blocks, EXIT_BLOCK_PTR->prev_bb->index);
+    check_last_block
+      = bitmap_bit_p (blocks, EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb->index);
 
   /* In the last basic block, before epilogue generation, there will be
      a fallthru edge to EXIT.  Special care is required if the last insn
@@ -7432,7 +7437,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
      Handle this by adding a dummy instruction in a new last basic block.  */
   if (check_last_block)
     {
-      basic_block bb = EXIT_BLOCK_PTR->prev_bb;
+      basic_block bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
       gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb);
       gimple t = NULL;
 
@@ -7443,7 +7448,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
 	{
 	  edge e;
 
-	  e = find_edge (bb, EXIT_BLOCK_PTR);
+	  e = find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun));
 	  if (e)
 	    {
 	      gsi_insert_on_edge (e, gimple_build_nop ());
@@ -7486,7 +7491,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
 #ifdef ENABLE_CHECKING
 		  if (stmt == last_stmt)
 		    {
-		      e = find_edge (bb, EXIT_BLOCK_PTR);
+		      e = find_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun));
 		      gcc_assert (e == NULL);
 		    }
 #endif
@@ -7499,7 +7504,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
 		      if (e)
 			blocks_split++;
 		    }
-		  make_edge (bb, EXIT_BLOCK_PTR, EDGE_FAKE);
+		  make_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
 		}
 	      gsi_prev (&gsi);
 	    }
@@ -7537,7 +7542,7 @@ remove_edge_and_dominated_blocks (edge e)
     }
 
   /* No updating is needed for edges to exit.  */
-  if (e->dest == EXIT_BLOCK_PTR)
+  if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
     {
       if (cfgcleanup_altered_bbs)
 	bitmap_set_bit (cfgcleanup_altered_bbs, e->src->index);
@@ -7577,7 +7582,7 @@ remove_edge_and_dominated_blocks (edge e)
 	{
 	  FOR_EACH_EDGE (f, ei, bb->succs)
 	    {
-	      if (f->dest != EXIT_BLOCK_PTR)
+	      if (f->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		bitmap_set_bit (df, f->dest->index);
 	    }
 	}
@@ -7928,8 +7933,8 @@ split_critical_edges (void)
 	     gimple_find_edge_insert_loc.  */
 	  else if ((!single_pred_p (e->dest)
 	            || !gimple_seq_empty_p (phi_nodes (e->dest))
-	            || e->dest == EXIT_BLOCK_PTR)
-		   && e->src != ENTRY_BLOCK_PTR
+		    || e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
+		   && e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	           && !(e->flags & EDGE_ABNORMAL))
 	    {
 	      gimple_stmt_iterator gsi;
@@ -8053,10 +8058,10 @@ execute_warn_function_return (void)
 
   /* If we have a path to EXIT, then we do return.  */
   if (TREE_THIS_VOLATILE (cfun->decl)
-      && EDGE_COUNT (EXIT_BLOCK_PTR->preds) > 0)
+      && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) > 0)
     {
       location = UNKNOWN_LOCATION;
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	{
 	  last = last_stmt (e->src);
 	  if ((gimple_code (last) == GIMPLE_RETURN
@@ -8073,10 +8078,10 @@ execute_warn_function_return (void)
      without returning a value.  */
   else if (warn_return_type
 	   && !TREE_NO_WARNING (cfun->decl)
-	   && EDGE_COUNT (EXIT_BLOCK_PTR->preds) > 0
+	   && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) > 0
 	   && !VOID_TYPE_P (TREE_TYPE (TREE_TYPE (cfun->decl))))
     {
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	{
 	  gimple last = last_stmt (e->src);
 	  if (gimple_code (last) == GIMPLE_RETURN
@@ -8293,13 +8298,15 @@ execute_fixup_cfg (void)
 
   count_scale
       = GCOV_COMPUTE_SCALE (cgraph_get_node (current_function_decl)->count,
-                            ENTRY_BLOCK_PTR->count);
+			    ENTRY_BLOCK_PTR_FOR_FN (cfun)->count);
 
-  ENTRY_BLOCK_PTR->count = cgraph_get_node (current_function_decl)->count;
-  EXIT_BLOCK_PTR->count = apply_scale (EXIT_BLOCK_PTR->count,
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->count =
+			    cgraph_get_node (current_function_decl)->count;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->count =
+			    apply_scale (EXIT_BLOCK_PTR_FOR_FN (cfun)->count,
                                        count_scale);
 
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     e->count = apply_scale (e->count, count_scale);
 
   FOR_EACH_BB (bb)
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index ec99ed0..4e5adc2 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -251,14 +251,14 @@ tree_forwarder_block_p (basic_block bb, bool phi_wanted)
 	 Otherwise, BB must have PHI nodes.  */
       || gimple_seq_empty_p (phi_nodes (bb)) == phi_wanted
       /* BB may not be a predecessor of EXIT_BLOCK_PTR.  */
-      || single_succ (bb) == EXIT_BLOCK_PTR
+      || single_succ (bb) == EXIT_BLOCK_PTR_FOR_FN (cfun)
       /* Nor should this be an infinite loop.  */
       || single_succ (bb) == bb
       /* BB may not have an abnormal outgoing edge.  */
       || (single_succ_edge (bb)->flags & EDGE_ABNORMAL))
     return false;
 
-  gcc_checking_assert (bb != ENTRY_BLOCK_PTR);
+  gcc_checking_assert (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   locus = single_succ_edge (bb)->goto_locus;
 
@@ -268,7 +268,7 @@ tree_forwarder_block_p (basic_block bb, bool phi_wanted)
     edge e;
 
     FOR_EACH_EDGE (e, ei, bb->preds)
-      if (e->src == ENTRY_BLOCK_PTR || (e->flags & EDGE_EH))
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun) || (e->flags & EDGE_EH))
 	return false;
       /* If goto_locus of any of the edges differs, prevent removing
 	 the forwarder block for -O0.  */
diff --git a/gcc/tree-complex.c b/gcc/tree-complex.c
index 05f30e5..7bc3458 100644
--- a/gcc/tree-complex.c
+++ b/gcc/tree-complex.c
@@ -690,7 +690,7 @@ update_complex_assignment (gimple_stmt_iterator *gsi, tree r, tree i)
 static void
 update_parameter_components (void)
 {
-  edge entry_edge = single_succ_edge (ENTRY_BLOCK_PTR);
+  edge entry_edge = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   tree parm;
 
   for (parm = DECL_ARGUMENTS (cfun->decl); parm ; parm = DECL_CHAIN (parm))
diff --git a/gcc/tree-if-conv.c b/gcc/tree-if-conv.c
index dd3925a..907b403 100644
--- a/gcc/tree-if-conv.c
+++ b/gcc/tree-if-conv.c
@@ -918,7 +918,7 @@ get_loop_body_in_if_conv_order (const struct loop *loop)
   unsigned int visited_count = 0;
 
   gcc_assert (loop->num_nodes);
-  gcc_assert (loop->latch != EXIT_BLOCK_PTR);
+  gcc_assert (loop->latch != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   blocks = XCNEWVEC (basic_block, loop->num_nodes);
   visited = BITMAP_ALLOC (NULL);
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index 6ef8bb4..25705a9 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -199,7 +199,7 @@ remap_ssa_name (tree name, copy_body_data *id)
       if (SSA_NAME_IS_DEFAULT_DEF (name)
 	  && TREE_CODE (SSA_NAME_VAR (name)) == PARM_DECL
 	  && id->entry_bb == NULL
-	  && single_succ_p (ENTRY_BLOCK_PTR))
+	  && single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (cfun)))
 	{
 	  tree vexpr = make_node (DEBUG_EXPR_DECL);
 	  gimple def_temp;
@@ -218,7 +218,7 @@ remap_ssa_name (tree name, copy_body_data *id)
 	  DECL_ARTIFICIAL (vexpr) = 1;
 	  TREE_TYPE (vexpr) = TREE_TYPE (name);
 	  DECL_MODE (vexpr) = DECL_MODE (SSA_NAME_VAR (name));
-	  gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+	  gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 	  gsi_insert_before (&gsi, def_temp, GSI_SAME_STMT);
 	  return vexpr;
 	}
@@ -300,7 +300,8 @@ remap_ssa_name (tree name, copy_body_data *id)
 	      && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (name)
 	      && (!SSA_NAME_VAR (name)
 		  || TREE_CODE (SSA_NAME_VAR (name)) != PARM_DECL)
-	      && (id->entry_bb != EDGE_SUCC (ENTRY_BLOCK_PTR, 0)->dest
+	      && (id->entry_bb
+		  != EDGE_SUCC (ENTRY_BLOCK_PTR_FOR_FN (cfun), 0)->dest
 		  || EDGE_COUNT (id->entry_bb->preds) != 1))
 	    {
 	      gimple_stmt_iterator gsi = gsi_last_bb (id->entry_bb);
@@ -1978,7 +1979,7 @@ copy_edges_for_bb (basic_block bb, gcov_type count_scale, basic_block ret_bb,
 
 	/* Return edges do get a FALLTHRU flag when the get inlined.  */
 	if (old_edge->dest->index == EXIT_BLOCK && !old_edge->flags
-	    && old_edge->dest->aux != EXIT_BLOCK_PTR)
+	    && old_edge->dest->aux != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  flags |= EDGE_FALLTHRU;
 	new_edge = make_edge (new_bb, (basic_block) old_edge->dest->aux, flags);
 	new_edge->count = apply_scale (old_edge->count, count_scale);
@@ -2163,10 +2164,10 @@ initialize_cfun (tree new_fndecl, tree callee_fndecl, gcov_type count)
   if (!DECL_RESULT (new_fndecl))
     DECL_RESULT (new_fndecl) = DECL_RESULT (callee_fndecl);
 
-  if (ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count)
+  if (ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count)
     count_scale
         = GCOV_COMPUTE_SCALE (count,
-                              ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count);
+                              ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count);
   else
     count_scale = REG_BR_PROB_BASE;
 
@@ -2202,16 +2203,16 @@ initialize_cfun (tree new_fndecl, tree callee_fndecl, gcov_type count)
   init_empty_tree_cfg ();
 
   profile_status_for_function (cfun) = profile_status_for_function (src_cfun);
-  ENTRY_BLOCK_PTR->count =
-    (ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count * count_scale /
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->count =
+    (ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count * count_scale /
      REG_BR_PROB_BASE);
-  ENTRY_BLOCK_PTR->frequency
-    = ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->frequency;
-  EXIT_BLOCK_PTR->count =
-    (EXIT_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count * count_scale /
+  ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency
+    = ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->frequency;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->count =
+    (EXIT_BLOCK_PTR_FOR_FN (src_cfun)->count * count_scale /
      REG_BR_PROB_BASE);
-  EXIT_BLOCK_PTR->frequency =
-    EXIT_BLOCK_PTR_FOR_FUNCTION (src_cfun)->frequency;
+  EXIT_BLOCK_PTR_FOR_FN (cfun)->frequency =
+    EXIT_BLOCK_PTR_FOR_FN (src_cfun)->frequency;
   if (src_cfun->eh)
     init_eh_for_function ();
 
@@ -2410,7 +2411,7 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
      before inlining, using the guessed edge frequencies, so that we don't
      end up with a 0-count inline body which can confuse downstream
      optimizations such as function splitting.  */
-  if (!ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count && count)
+  if (!ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count && count)
     {
       /* Apply the larger of the call bb count and the total incoming
          call edge count to the callee.  */
@@ -2422,10 +2423,10 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
       freqs_to_counts (id->src_node, count > in_count ? count : in_count);
     }
 
-  if (ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count)
+  if (ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count)
     count_scale
         = GCOV_COMPUTE_SCALE (count,
-                              ENTRY_BLOCK_PTR_FOR_FUNCTION (src_cfun)->count);
+                              ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count);
   else
     count_scale = REG_BR_PROB_BASE;
 
@@ -2450,20 +2451,20 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
       incoming_count = apply_scale (incoming_count, count_scale);
       incoming_frequency
 	= apply_scale ((gcov_type)incoming_frequency, frequency_scale);
-      ENTRY_BLOCK_PTR->count = incoming_count;
-      ENTRY_BLOCK_PTR->frequency = incoming_frequency;
+      ENTRY_BLOCK_PTR_FOR_FN (cfun)->count = incoming_count;
+      ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency = incoming_frequency;
     }
 
   /* Must have a CFG here at this point.  */
-  gcc_assert (ENTRY_BLOCK_PTR_FOR_FUNCTION
+  gcc_assert (ENTRY_BLOCK_PTR_FOR_FN
 	      (DECL_STRUCT_FUNCTION (callee_fndecl)));
 
   cfun_to_copy = id->src_cfun = DECL_STRUCT_FUNCTION (callee_fndecl);
 
-  ENTRY_BLOCK_PTR_FOR_FUNCTION (cfun_to_copy)->aux = entry_block_map;
-  EXIT_BLOCK_PTR_FOR_FUNCTION (cfun_to_copy)->aux = exit_block_map;
-  entry_block_map->aux = ENTRY_BLOCK_PTR_FOR_FUNCTION (cfun_to_copy);
-  exit_block_map->aux = EXIT_BLOCK_PTR_FOR_FUNCTION (cfun_to_copy);
+  ENTRY_BLOCK_PTR_FOR_FN (cfun_to_copy)->aux = entry_block_map;
+  EXIT_BLOCK_PTR_FOR_FN (cfun_to_copy)->aux = exit_block_map;
+  entry_block_map->aux = ENTRY_BLOCK_PTR_FOR_FN (cfun_to_copy);
+  exit_block_map->aux = EXIT_BLOCK_PTR_FOR_FN (cfun_to_copy);
 
   /* Duplicate any exception-handling regions.  */
   if (cfun->eh)
@@ -2694,7 +2695,7 @@ copy_body (copy_body_data *id, gcov_type count, int frequency_scale,
   tree body;
 
   /* If this body has a CFG, walk CFG and copy.  */
-  gcc_assert (ENTRY_BLOCK_PTR_FOR_FUNCTION (DECL_STRUCT_FUNCTION (fndecl)));
+  gcc_assert (ENTRY_BLOCK_PTR_FOR_FN (DECL_STRUCT_FUNCTION (fndecl)));
   body = copy_cfg_body (id, count, frequency_scale, entry_block_map, exit_block_map,
 			new_entry);
   copy_debug_stmts (id);
@@ -5098,7 +5099,8 @@ delete_unreachable_blocks_update_callgraph (copy_body_data *id)
 
   /* Delete all unreachable basic blocks.  */
 
-  for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR; b = next_bb)
+  for (b = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb;
+       b != EXIT_BLOCK_PTR_FOR_FN (cfun); b = next_bb)
     {
       next_bb = b->next_bb;
 
@@ -5294,7 +5296,7 @@ tree_function_versioning (tree old_decl, tree new_decl,
   id.transform_parameter = false;
   id.transform_lang_insert_block = NULL;
 
-  old_entry_block = ENTRY_BLOCK_PTR_FOR_FUNCTION
+  old_entry_block = ENTRY_BLOCK_PTR_FOR_FN
     (DECL_STRUCT_FUNCTION (old_decl));
   DECL_RESULT (new_decl) = DECL_RESULT (old_decl);
   DECL_ARGUMENTS (new_decl) = DECL_ARGUMENTS (old_decl);
@@ -5413,7 +5415,8 @@ tree_function_versioning (tree old_decl, tree new_decl,
 
   /* Copy the Function's body.  */
   copy_body (&id, old_entry_block->count, REG_BR_PROB_BASE,
-	     ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, new_entry);
+	     ENTRY_BLOCK_PTR_FOR_FN (cfun), EXIT_BLOCK_PTR_FOR_FN (cfun),
+	     new_entry);
 
   /* Renumber the lexical scoping (non-code) blocks consecutively.  */
   number_blocks (new_decl);
@@ -5421,7 +5424,7 @@ tree_function_versioning (tree old_decl, tree new_decl,
   /* We want to create the BB unconditionally, so that the addition of
      debug stmts doesn't affect BB count, which may in the end cause
      codegen differences.  */
-  bb = split_edge (single_succ_edge (ENTRY_BLOCK_PTR));
+  bb = split_edge (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
   while (init_stmts.length ())
     insert_init_stmt (&id, bb, init_stmts.pop ());
   update_clone_info (&id);
@@ -5458,7 +5461,7 @@ tree_function_versioning (tree old_decl, tree new_decl,
       struct cgraph_edge *e;
       rebuild_frequencies ();
 
-      new_version_node->count = ENTRY_BLOCK_PTR->count;
+      new_version_node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
       for (e = new_version_node->callees; e; e = e->next_callee)
 	{
 	  basic_block bb = gimple_bb (e->call_stmt);
diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c
index b2b5799..6cae27e 100644
--- a/gcc/tree-into-ssa.c
+++ b/gcc/tree-into-ssa.c
@@ -1221,10 +1221,12 @@ rewrite_debug_stmt_uses (gimple stmt)
       def = info->current_def;
       if (!def)
 	{
-	  if (TREE_CODE (var) == PARM_DECL && single_succ_p (ENTRY_BLOCK_PTR))
+	  if (TREE_CODE (var) == PARM_DECL
+	      && single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (cfun)))
 	    {
 	      gimple_stmt_iterator gsi
-		= gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+		= gsi_after_labels
+		    (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 	      int lim;
 	      /* Search a few source bind stmts at the start of first bb to
 		 see if a DEBUG_EXPR_DECL can't be reused.  */
@@ -1253,7 +1255,8 @@ rewrite_debug_stmt_uses (gimple stmt)
 		  DECL_ARTIFICIAL (def) = 1;
 		  TREE_TYPE (def) = TREE_TYPE (var);
 		  DECL_MODE (def) = DECL_MODE (var);
-		  gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+		  gsi = gsi_after_labels
+			  (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 		  gsi_insert_before (&gsi, def_temp, GSI_SAME_STMT);
 		}
 	      update = true;
@@ -1868,7 +1871,7 @@ maybe_register_def (def_operand_p def_p, gimple stmt,
 		     bind stmts, but there wouldn't be a PC to bind
 		     them to either, so avoid diverging the CFG.  */
 		  if (ef && single_pred_p (ef->dest)
-		      && ef->dest != EXIT_BLOCK_PTR)
+		      && ef->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		    {
 		      /* If there were PHI nodes in the node, we'd
 			 have to make sure the value we're binding
@@ -2331,7 +2334,7 @@ rewrite_into_ssa (void)
   insert_phi_nodes (dfs);
 
   /* 4- Rename all the blocks.  */
-  rewrite_blocks (ENTRY_BLOCK_PTR, REWRITE_ALL);
+  rewrite_blocks (ENTRY_BLOCK_PTR_FOR_FN (cfun), REWRITE_ALL);
 
   /* Free allocated memory.  */
   FOR_EACH_BB (bb)
@@ -3017,7 +3020,7 @@ insert_updated_phi_nodes_for (tree var, bitmap_head *dfs, bitmap blocks,
 	     common dominator of all the definition blocks.  */
 	  entry = nearest_common_dominator_for_set (CDI_DOMINATORS,
 						    db->def_blocks);
-	  if (entry != ENTRY_BLOCK_PTR)
+	  if (entry != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    EXECUTE_IF_SET_IN_BITMAP (idf, 0, i, bi)
 	      if (BASIC_BLOCK (i) != entry
 		  && dominated_by_p (CDI_DOMINATORS, BASIC_BLOCK (i), entry))
@@ -3216,7 +3219,7 @@ update_ssa (unsigned update_flags)
 	 be possible to determine the nearest block that had a
 	 definition for each of the symbols that are marked for
 	 updating.  For now this seems more work than it's worth.  */
-      start_bb = ENTRY_BLOCK_PTR;
+      start_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
       /* Traverse the CFG looking for existing definitions and uses of
 	 symbols in SSA operands.  Mark interesting blocks and
@@ -3299,7 +3302,7 @@ update_ssa (unsigned update_flags)
       /* Insertion of PHI nodes may have added blocks to the region.
 	 We need to re-compute START_BB to include the newly added
 	 blocks.  */
-      if (start_bb != ENTRY_BLOCK_PTR)
+      if (start_bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	start_bb = nearest_common_dominator_for_set (CDI_DOMINATORS,
 						     blocks_to_update);
     }
diff --git a/gcc/tree-outof-ssa.c b/gcc/tree-outof-ssa.c
index 333ef76..9a7a73f 100644
--- a/gcc/tree-outof-ssa.c
+++ b/gcc/tree-outof-ssa.c
@@ -931,7 +931,8 @@ expand_phi_nodes (struct ssaexpand *sa)
   elim_graph g = new_elim_graph (sa->map->num_partitions);
   g->map = sa->map;
 
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR->next_bb, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb,
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     if (!gimple_seq_empty_p (phi_nodes (bb)))
       {
 	edge e;
diff --git a/gcc/tree-profile.c b/gcc/tree-profile.c
index fb4df90..0adc51a 100644
--- a/gcc/tree-profile.c
+++ b/gcc/tree-profile.c
@@ -440,7 +440,8 @@ gimple_gen_ic_func_profiler (void)
     stmt1: __gcov_indirect_call_profiler_v2 (profile_id,
 					     &current_function_decl)
    */
-  gsi = gsi_after_labels (split_edge (single_succ_edge (ENTRY_BLOCK_PTR)));
+  gsi = gsi_after_labels
+	  (split_edge (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun))));
 
   cur_func = force_gimple_operand_gsi (&gsi,
 				       build_addr (current_function_decl,
diff --git a/gcc/tree-scalar-evolution.h b/gcc/tree-scalar-evolution.h
index db7ac4c..8846fbe 100644
--- a/gcc/tree-scalar-evolution.h
+++ b/gcc/tree-scalar-evolution.h
@@ -47,7 +47,7 @@ static inline basic_block
 block_before_loop (loop_p loop)
 {
   edge preheader = loop_preheader_edge (loop);
-  return (preheader ? preheader->src : ENTRY_BLOCK_PTR);
+  return (preheader ? preheader->src : ENTRY_BLOCK_PTR_FOR_FN (cfun));
 }
 
 /* Analyze all the parameters of the chrec that were left under a
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index ea1986c..5432048 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -3409,7 +3409,7 @@ initialize_parameter_reductions (void)
 
   seq = gsi_seq (gsi);
   if (seq)
-    gsi_insert_seq_on_edge_immediate (single_succ_edge (ENTRY_BLOCK_PTR), seq);
+    gsi_insert_seq_on_edge_immediate (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)), seq);
 }
 
 /* The "main" function of intraprocedural SRA passes.  Runs the analysis and if
@@ -3788,7 +3788,7 @@ propagate_dereference_distances (void)
   basic_block bb;
 
   queue.create (last_basic_block_for_function (cfun));
-  queue.quick_push (ENTRY_BLOCK_PTR);
+  queue.quick_push (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   FOR_EACH_BB (bb)
     {
       queue.quick_push (bb);
@@ -3818,7 +3818,7 @@ propagate_dereference_distances (void)
 	  {
 	    int succ_idx = e->dest->index * func_param_count + i;
 
-	    if (e->src == EXIT_BLOCK_PTR)
+	    if (e->src == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	      continue;
 
 	    if (first)
@@ -3859,10 +3859,11 @@ dump_dereferences_table (FILE *f, const char *str, HOST_WIDE_INT *table)
   basic_block bb;
 
   fprintf (dump_file, str);
-  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, EXIT_BLOCK_PTR, next_bb)
+  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun),
+		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
     {
       fprintf (f, "%4i  %i   ", bb->index, bitmap_bit_p (final_bbs, bb->index));
-      if (bb != EXIT_BLOCK_PTR)
+      if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	{
 	  int i;
 	  for (i = 0; i < func_param_count; i++)
@@ -3914,7 +3915,7 @@ analyze_caller_dereference_legality (vec<access_p> representatives)
   for (i = 0; i < func_param_count; i++)
     {
       struct access *repr = representatives[i];
-      int idx = ENTRY_BLOCK_PTR->index * func_param_count + i;
+      int idx = ENTRY_BLOCK_PTR_FOR_FN (cfun)->index * func_param_count + i;
 
       if (!repr || no_accesses_p (repr))
 	continue;
@@ -4728,9 +4729,9 @@ sra_ipa_reset_debug_stmts (ipa_parm_adjustment_vec adjustments)
   int i, len;
   gimple_stmt_iterator *gsip = NULL, gsi;
 
-  if (MAY_HAVE_DEBUG_STMTS && single_succ_p (ENTRY_BLOCK_PTR))
+  if (MAY_HAVE_DEBUG_STMTS && single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (cfun)))
     {
-      gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+      gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
       gsip = &gsi;
     }
   len = adjustments.length ();
diff --git a/gcc/tree-ssa-ccp.c b/gcc/tree-ssa-ccp.c
index 6a542b8..3a9875d 100644
--- a/gcc/tree-ssa-ccp.c
+++ b/gcc/tree-ssa-ccp.c
@@ -1824,7 +1824,7 @@ gsi_prev_dom_bb_nondebug (gimple_stmt_iterator *i)
   while (gsi_end_p (*i))
     {
       dom = get_immediate_dominator (CDI_DOMINATORS, i->bb);
-      if (dom == NULL || dom == ENTRY_BLOCK_PTR)
+      if (dom == NULL || dom == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	return;
 
       *i = gsi_last_bb (dom);
@@ -2314,7 +2314,7 @@ optimize_stack_restore (gimple_stmt_iterator i)
     case 0:
       break;
     case 1:
-      if (single_succ_edge (bb)->dest != EXIT_BLOCK_PTR)
+      if (single_succ_edge (bb)->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	return NULL_TREE;
       break;
     default:
diff --git a/gcc/tree-ssa-coalesce.c b/gcc/tree-ssa-coalesce.c
index cc46370..d6fbb1c 100644
--- a/gcc/tree-ssa-coalesce.c
+++ b/gcc/tree-ssa-coalesce.c
@@ -1078,7 +1078,7 @@ create_outofssa_var_map (coalesce_list_p cl, bitmap used_in_copy)
 		  v2 = SSA_NAME_VERSION (var);
 		  bitmap_set_bit (used_in_copy, v1);
 		  bitmap_set_bit (used_in_copy, v2);
-		  cost = coalesce_cost_bb (EXIT_BLOCK_PTR);
+		  cost = coalesce_cost_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
 		  add_coalesce (cl, v1, v2, cost);
 		}
 	    }
diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c
index e07bd42..0c8110f 100644
--- a/gcc/tree-ssa-dce.c
+++ b/gcc/tree-ssa-dce.c
@@ -328,9 +328,9 @@ mark_control_dependent_edges_necessary (basic_block bb, bool ignore_self)
   unsigned edge_number;
   bool skipped = false;
 
-  gcc_assert (bb != EXIT_BLOCK_PTR);
+  gcc_assert (bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
-  if (bb == ENTRY_BLOCK_PTR)
+  if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     return;
 
   EXECUTE_IF_SET_IN_BITMAP (cd->get_edges_dependent_on (bb->index),
@@ -636,7 +636,7 @@ propagate_necessity (bool aggressive)
 	     containing STMT is control dependent, but only if we haven't
 	     already done so.  */
 	  basic_block bb = gimple_bb (stmt);
-	  if (bb != ENTRY_BLOCK_PTR
+	  if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 	      && !bitmap_bit_p (visited_control_parents, bb->index))
 	    mark_control_dependent_edges_necessary (bb, false);
 	}
@@ -742,7 +742,7 @@ propagate_necessity (bool aggressive)
 		      if (!bitmap_bit_p (last_stmt_necessary, arg_bb->index))
 			mark_last_stmt_necessary (arg_bb);
 		    }
-		  else if (arg_bb != ENTRY_BLOCK_PTR
+		  else if (arg_bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
 		           && !bitmap_bit_p (visited_control_parents,
 					 arg_bb->index))
 		    mark_control_dependent_edges_necessary (arg_bb, true);
@@ -1076,7 +1076,7 @@ remove_dead_stmt (gimple_stmt_iterator *i, basic_block bb)
 	 fake edges in the dominator tree.  */
       if (e)
         ;
-      else if (! post_dom_bb || post_dom_bb == EXIT_BLOCK_PTR)
+      else if (! post_dom_bb || post_dom_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	e = EDGE_SUCC (bb, 0);
       else
         e = forward_edge_to_pdom (EDGE_SUCC (bb, 0), post_dom_bb);
@@ -1168,7 +1168,8 @@ eliminate_unnecessary_stmts (void)
 
      as desired.  */
   gcc_assert (dom_info_available_p (CDI_DOMINATORS));
-  h = get_all_dominated_blocks (CDI_DOMINATORS, single_succ (ENTRY_BLOCK_PTR));
+  h = get_all_dominated_blocks (CDI_DOMINATORS,
+				single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 
   while (h.length ())
     {
@@ -1265,7 +1266,8 @@ eliminate_unnecessary_stmts (void)
       find_unreachable_blocks ();
 
       /* Delete all unreachable basic blocks in reverse dominator order.  */
-      for (bb = EXIT_BLOCK_PTR->prev_bb; bb != ENTRY_BLOCK_PTR; bb = prev_bb)
+      for (bb = EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb;
+	   bb != ENTRY_BLOCK_PTR_FOR_FN (cfun); bb = prev_bb)
 	{
 	  prev_bb = bb->prev_bb;
 
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index bfd865d..a286c10 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -902,7 +902,7 @@ tree_ssa_dominator_optimize (void)
 	  while (single_succ_p (bb)
 		 && (single_succ_edge (bb)->flags & EDGE_EH) == 0)
 	    bb = single_succ (bb);
-	  if (bb == EXIT_BLOCK_PTR)
+	  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    continue;
 	  if ((unsigned) bb->index != i)
 	    bitmap_set_bit (need_eh_cleanup, bb->index);
@@ -3054,7 +3054,8 @@ eliminate_degenerate_phis (void)
      phase in dominator order.  Presumably this is because walking
      in dominator order leaves fewer PHIs for later examination
      by the worklist phase.  */
-  eliminate_degenerate_phis_1 (ENTRY_BLOCK_PTR, interesting_names);
+  eliminate_degenerate_phis_1 (ENTRY_BLOCK_PTR_FOR_FN (cfun),
+			       interesting_names);
 
   /* Second phase.  Eliminate second order degenerate PHIs as well
      as trivial copies or constant initializations identified by
diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
index 5dc8d02..51b4101 100644
--- a/gcc/tree-ssa-live.c
+++ b/gcc/tree-ssa-live.c
@@ -1009,7 +1009,7 @@ loe_visit_block (tree_live_info_p live, basic_block bb, sbitmap visited,
   FOR_EACH_EDGE (e, ei, bb->preds)
     {
       pred_bb = e->src;
-      if (pred_bb == ENTRY_BLOCK_PTR)
+      if (pred_bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
       /* TMP is variables live-on-entry from BB that aren't defined in the
 	 predecessor block.  This should be the live on entry vars to pred.
@@ -1087,7 +1087,7 @@ set_var_live_on_entry (tree ssa_name, tree_live_info_p live)
 	bitmap_set_bit (&live->liveout[def_bb->index], p);
     }
   else
-    def_bb = ENTRY_BLOCK_PTR;
+    def_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   /* Visit each use of SSA_NAME and if it isn't in the same block as the def,
      add it to the list of live on entry blocks.  */
@@ -1103,7 +1103,7 @@ set_var_live_on_entry (tree ssa_name, tree_live_info_p live)
 	     defined in that block, or whether its live on entry.  */
 	  int index = PHI_ARG_INDEX_FROM_USE (use);
 	  edge e = gimple_phi_arg_edge (use_stmt, index);
-	  if (e->src != ENTRY_BLOCK_PTR)
+	  if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      if (e->src != def_bb)
 		add_block = e->src;
@@ -1169,14 +1169,14 @@ calculate_live_on_exit (tree_live_info_p liveinfo)
 	      if (p == NO_PARTITION)
 		continue;
 	      e = gimple_phi_arg_edge (phi, i);
-	      if (e->src != ENTRY_BLOCK_PTR)
+	      if (e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 		bitmap_set_bit (&liveinfo->liveout[e->src->index], p);
 	    }
 	}
 
       /* Add each successors live on entry to this bock live on exit.  */
       FOR_EACH_EDGE (e, ei, bb->succs)
-        if (e->dest != EXIT_BLOCK_PTR)
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  bitmap_ior_into (&liveinfo->liveout[bb->index],
 			   live_on_entry (liveinfo, e->dest));
     }
@@ -1369,12 +1369,12 @@ verify_live_on_entry (tree_live_info_p live)
    /* Check for live on entry partitions and report those with a DEF in
       the program. This will typically mean an optimization has done
       something wrong.  */
-  bb = ENTRY_BLOCK_PTR;
+  bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
   num = 0;
   FOR_EACH_EDGE (e, ei, bb->succs)
     {
       int entry_block = e->dest->index;
-      if (e->dest == EXIT_BLOCK_PTR)
+      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
         continue;
       for (i = 0; i < (unsigned)num_var_partitions (map); i++)
 	{
diff --git a/gcc/tree-ssa-live.h b/gcc/tree-ssa-live.h
index 0aa9f0c..e8074bd 100644
--- a/gcc/tree-ssa-live.h
+++ b/gcc/tree-ssa-live.h
@@ -273,8 +273,8 @@ static inline bitmap
 live_on_entry (tree_live_info_p live, basic_block bb)
 {
   gcc_checking_assert (live->livein
-		       && bb != ENTRY_BLOCK_PTR
-		       && bb != EXIT_BLOCK_PTR);
+		       && bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+		       && bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   return &live->livein[bb->index];
 }
@@ -287,8 +287,8 @@ static inline bitmap
 live_on_exit (tree_live_info_p live, basic_block bb)
 {
   gcc_checking_assert (live->liveout
-		       && bb != ENTRY_BLOCK_PTR
-		       && bb != EXIT_BLOCK_PTR);
+		       && bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+		       && bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   return &live->liveout[bb->index];
 }
diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
index c20ffe6..6d7d78e 100644
--- a/gcc/tree-ssa-loop-ivopts.c
+++ b/gcc/tree-ssa-loop-ivopts.c
@@ -2007,7 +2007,7 @@ find_interesting_uses (struct ivopts_data *data)
       bb = body[i];
 
       FOR_EACH_EDGE (e, ei, bb->succs)
-	if (e->dest != EXIT_BLOCK_PTR
+	if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	    && !flow_bb_inside_loop_p (data->current_loop, e->dest))
 	  find_interesting_uses_outside (data, e);
 
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index 246b667..6729167 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -231,7 +231,7 @@ compute_live_loop_exits (bitmap live_exits, bitmap use_blocks,
 	  bool pred_visited;
 
 	  /* We should have met DEF_BB along the way.  */
-	  gcc_assert (pred != ENTRY_BLOCK_PTR);
+	  gcc_assert (pred != ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
 	  if (pred_loop_depth >= def_loop_depth)
 	    {
diff --git a/gcc/tree-ssa-loop-niter.c b/gcc/tree-ssa-loop-niter.c
index 1e0dcd6..9c61c3c 100644
--- a/gcc/tree-ssa-loop-niter.c
+++ b/gcc/tree-ssa-loop-niter.c
@@ -496,7 +496,7 @@ bound_difference (struct loop *loop, tree x, tree y, bounds *bnds)
   /* Now walk the dominators of the loop header and use the entry
      guards to refine the estimates.  */
   for (bb = loop->header;
-       bb != ENTRY_BLOCK_PTR && cnt < MAX_DOMINATORS_TO_WALK;
+       bb != ENTRY_BLOCK_PTR_FOR_FN (cfun) && cnt < MAX_DOMINATORS_TO_WALK;
        bb = get_immediate_dominator (CDI_DOMINATORS, bb))
     {
       if (!single_pred_p (bb))
@@ -1781,7 +1781,7 @@ simplify_using_initial_conditions (struct loop *loop, tree expr)
      the number of BBs times the number of loops in degenerate
      cases.  */
   for (bb = loop->header;
-       bb != ENTRY_BLOCK_PTR && cnt < MAX_DOMINATORS_TO_WALK;
+       bb != ENTRY_BLOCK_PTR_FOR_FN (cfun) && cnt < MAX_DOMINATORS_TO_WALK;
        bb = get_immediate_dominator (CDI_DOMINATORS, bb))
     {
       if (!single_pred_p (bb))
diff --git a/gcc/tree-ssa-loop-prefetch.c b/gcc/tree-ssa-loop-prefetch.c
index 4e49d76..f2b4e95 100644
--- a/gcc/tree-ssa-loop-prefetch.c
+++ b/gcc/tree-ssa-loop-prefetch.c
@@ -1282,7 +1282,7 @@ may_use_storent_in_loop_p (struct loop *loop)
 
       FOR_EACH_VEC_ELT (exits, i, exit)
 	if ((exit->flags & EDGE_ABNORMAL)
-	    && exit->dest == EXIT_BLOCK_PTR)
+	    && exit->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 	  ret = false;
 
       exits.release ();
diff --git a/gcc/tree-ssa-loop-unswitch.c b/gcc/tree-ssa-loop-unswitch.c
index 9f4d492..27f52b2 100644
--- a/gcc/tree-ssa-loop-unswitch.c
+++ b/gcc/tree-ssa-loop-unswitch.c
@@ -194,7 +194,7 @@ simplify_using_entry_checks (struct loop *loop, tree cond)
 	return cond;
 
       e = single_pred_edge (e->src);
-      if (e->src == ENTRY_BLOCK_PTR)
+      if (e->src == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	return cond;
     }
 }
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index 67117bc..ce7116e 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -288,7 +288,7 @@ register_division_in (basic_block bb)
   if (!occ)
     {
       occ = occ_new (bb, NULL);
-      insert_bb (occ, ENTRY_BLOCK_PTR, &occ_head);
+      insert_bb (occ, ENTRY_BLOCK_PTR_FOR_FN (cfun), &occ_head);
     }
 
   occ->bb_has_division = true;
diff --git a/gcc/tree-ssa-phiprop.c b/gcc/tree-ssa-phiprop.c
index e764040..389423b 100644
--- a/gcc/tree-ssa-phiprop.c
+++ b/gcc/tree-ssa-phiprop.c
@@ -381,7 +381,7 @@ tree_ssa_phiprop (void)
 
   /* Walk the dominator tree in preorder.  */
   bbs = get_all_dominated_blocks (CDI_DOMINATORS,
-				  single_succ (ENTRY_BLOCK_PTR));
+				  single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
   FOR_EACH_VEC_ELT (bbs, i, bb)
     for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       did_something |= propagate_with_phi (bb, gsi_stmt (gsi), phivn, n);
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index b16fd17..29d56b1 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -2467,7 +2467,7 @@ compute_antic (void)
     }
 
   /* At the exit block we anticipate nothing.  */
-  BB_VISITED (EXIT_BLOCK_PTR) = 1;
+  BB_VISITED (EXIT_BLOCK_PTR_FOR_FN (cfun)) = 1;
 
   changed_blocks = sbitmap_alloc (last_basic_block + 1);
   bitmap_ones (changed_blocks);
@@ -3668,7 +3668,7 @@ insert (void)
       num_iterations++;
       if (dump_file && dump_flags & TDF_DETAILS)
 	fprintf (dump_file, "Starting insert iteration %d\n", num_iterations);
-      new_stuff = insert_aux (ENTRY_BLOCK_PTR);
+      new_stuff = insert_aux (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
       /* Clear the NEW sets before the next iteration.  We have already
          fully propagated its contents.  */
@@ -3713,15 +3713,16 @@ compute_avail (void)
 
       e = get_or_alloc_expr_for_name (name);
       add_to_value (get_expr_value_id (e), e);
-      bitmap_insert_into_set (TMP_GEN (ENTRY_BLOCK_PTR), e);
-      bitmap_value_insert_into_set (AVAIL_OUT (ENTRY_BLOCK_PTR), e);
+      bitmap_insert_into_set (TMP_GEN (ENTRY_BLOCK_PTR_FOR_FN (cfun)), e);
+      bitmap_value_insert_into_set (AVAIL_OUT (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
+				    e);
     }
 
   if (dump_file && (dump_flags & TDF_DETAILS))
     {
-      print_bitmap_set (dump_file, TMP_GEN (ENTRY_BLOCK_PTR),
+      print_bitmap_set (dump_file, TMP_GEN (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			"tmp_gen", ENTRY_BLOCK);
-      print_bitmap_set (dump_file, AVAIL_OUT (ENTRY_BLOCK_PTR),
+      print_bitmap_set (dump_file, AVAIL_OUT (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			"avail_out", ENTRY_BLOCK);
     }
 
@@ -3730,7 +3731,7 @@ compute_avail (void)
 
   /* Seed the algorithm by putting the dominator children of the entry
      block on the worklist.  */
-  for (son = first_dom_son (CDI_DOMINATORS, ENTRY_BLOCK_PTR);
+  for (son = first_dom_son (CDI_DOMINATORS, ENTRY_BLOCK_PTR_FOR_FN (cfun));
        son;
        son = next_dom_son (CDI_DOMINATORS, son))
     worklist[sp++] = son;
diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c
index bd33071..b9db34c5 100644
--- a/gcc/tree-ssa-propagate.c
+++ b/gcc/tree-ssa-propagate.c
@@ -184,7 +184,8 @@ cfg_blocks_add (basic_block bb)
 {
   bool head = false;
 
-  gcc_assert (bb != ENTRY_BLOCK_PTR && bb != EXIT_BLOCK_PTR);
+  gcc_assert (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
+	      && bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
   gcc_assert (!bitmap_bit_p (bb_in_list, bb->index));
 
   if (cfg_blocks_empty_p ())
@@ -279,7 +280,7 @@ static void
 add_control_edge (edge e)
 {
   basic_block bb = e->dest;
-  if (bb == EXIT_BLOCK_PTR)
+  if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return;
 
   /* If the edge had already been executed, skip it.  */
@@ -408,7 +409,7 @@ simulate_block (basic_block block)
   gimple_stmt_iterator gsi;
 
   /* There is nothing to do for the exit block.  */
-  if (block == EXIT_BLOCK_PTR)
+  if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return;
 
   if (dump_file && (dump_flags & TDF_DETAILS))
@@ -519,7 +520,7 @@ ssa_prop_init (void)
 
   /* Seed the algorithm by adding the successors of the entry block to the
      edge worklist.  */
-  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR->succs)
+  FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     add_control_edge (e);
 }
 
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index eedccc6..4c4924c 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -1270,11 +1270,11 @@ build_and_add_sum (tree type, tree op1, tree op2, enum tree_code opcode)
   if ((!op1def || gimple_nop_p (op1def))
       && (!op2def || gimple_nop_p (op2def)))
     {
-      gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR));
+      gsi = gsi_after_labels (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
       if (gsi_end_p (gsi))
 	{
 	  gimple_stmt_iterator gsi2
-	    = gsi_last_bb (single_succ (ENTRY_BLOCK_PTR));
+	    = gsi_last_bb (single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 	  gimple_set_uid (sum,
 			  gsi_end_p (gsi2) ? 1 : gimple_uid (gsi_stmt (gsi2)));
 	}
@@ -4529,8 +4529,8 @@ debug_ops_vector (vec<operand_entry_t> ops)
 static void
 do_reassoc (void)
 {
-  break_up_subtract_bb (ENTRY_BLOCK_PTR);
-  reassociate_bb (EXIT_BLOCK_PTR);
+  break_up_subtract_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  reassociate_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
 }
 
 /* Initialize the reassociation pass.  */
diff --git a/gcc/tree-ssa-sink.c b/gcc/tree-ssa-sink.c
index f0c831d..305882d 100644
--- a/gcc/tree-ssa-sink.c
+++ b/gcc/tree-ssa-sink.c
@@ -170,7 +170,7 @@ nearest_common_dominator_of_uses (gimple stmt, bool *debug_stmts)
 	    }
 
 	  /* Short circuit. Nothing dominates the entry block.  */
-	  if (useblock == ENTRY_BLOCK_PTR)
+	  if (useblock == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    {
 	      BITMAP_FREE (blocks);
 	      return NULL;
@@ -568,7 +568,7 @@ execute_sink_code (void)
   memset (&sink_stats, 0, sizeof (sink_stats));
   calculate_dominance_info (CDI_DOMINATORS);
   calculate_dominance_info (CDI_POST_DOMINATORS);
-  sink_code_in_bb (EXIT_BLOCK_PTR);
+  sink_code_in_bb (EXIT_BLOCK_PTR_FOR_FN (cfun));
   statistics_counter_event (cfun, "Sunk statements", sink_stats.sunk);
   free_dominance_info (CDI_POST_DOMINATORS);
   remove_fake_exit_edges ();
diff --git a/gcc/tree-ssa-uninit.c b/gcc/tree-ssa-uninit.c
index a15e37c..3b8d1df 100644
--- a/gcc/tree-ssa-uninit.c
+++ b/gcc/tree-ssa-uninit.c
@@ -175,7 +175,7 @@ warn_uninitialized_vars (bool warn_possibly_uninitialized)
   FOR_EACH_BB (bb)
     {
       bool always_executed = dominated_by_p (CDI_POST_DOMINATORS,
-					     single_succ (ENTRY_BLOCK_PTR), bb);
+					     single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)), bb);
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
 	  gimple stmt = gsi_stmt (gsi);
@@ -315,14 +315,14 @@ compute_uninit_opnds_pos (gimple phi)
 static inline basic_block
 find_pdom (basic_block block)
 {
-   if (block == EXIT_BLOCK_PTR)
-     return EXIT_BLOCK_PTR;
+   if (block == EXIT_BLOCK_PTR_FOR_FN (cfun))
+     return EXIT_BLOCK_PTR_FOR_FN (cfun);
    else
      {
        basic_block bb
            = get_immediate_dominator (CDI_POST_DOMINATORS, block);
        if (! bb)
-         return EXIT_BLOCK_PTR;
+	 return EXIT_BLOCK_PTR_FOR_FN (cfun);
        return bb;
      }
 }
@@ -333,13 +333,13 @@ find_pdom (basic_block block)
 static inline basic_block
 find_dom (basic_block block)
 {
-   if (block == ENTRY_BLOCK_PTR)
-     return ENTRY_BLOCK_PTR;
+   if (block == ENTRY_BLOCK_PTR_FOR_FN (cfun))
+     return ENTRY_BLOCK_PTR_FOR_FN (cfun);
    else
      {
        basic_block bb = get_immediate_dominator (CDI_DOMINATORS, block);
        if (! bb)
-         return ENTRY_BLOCK_PTR;
+	 return ENTRY_BLOCK_PTR_FOR_FN (cfun);
        return bb;
      }
 }
@@ -454,7 +454,8 @@ compute_control_dep_chain (basic_block bb, basic_block dep_bb,
 
           cd_bb = find_pdom (cd_bb);
           post_dom_check++;
-          if (cd_bb == EXIT_BLOCK_PTR || post_dom_check > MAX_POSTDOM_CHECK)
+	  if (cd_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
+	      || post_dom_check > MAX_POSTDOM_CHECK)
             break;
         }
       cur_cd_chain->pop ();
diff --git a/gcc/tree-stdarg.c b/gcc/tree-stdarg.c
index 221e7d7..9829374 100644
--- a/gcc/tree-stdarg.c
+++ b/gcc/tree-stdarg.c
@@ -97,7 +97,7 @@ reachable_at_most_once (basic_block va_arg_bb, basic_block va_start_bb)
 	  break;
 	}
 
-      gcc_assert (src != ENTRY_BLOCK_PTR);
+      gcc_assert (src != ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
       if (! bitmap_bit_p (visited, src->index))
 	{
diff --git a/gcc/tree-tailcall.c b/gcc/tree-tailcall.c
index 33677ce..9a30400 100644
--- a/gcc/tree-tailcall.c
+++ b/gcc/tree-tailcall.c
@@ -821,7 +821,7 @@ eliminate_tail_call (struct tailcall *t)
 
   gcc_assert (is_gimple_call (stmt));
 
-  first = single_succ (ENTRY_BLOCK_PTR);
+  first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   /* Remove the code after call_gsi that will become unreachable.  The
      possibly unreachable code in other blocks is removed later in
@@ -842,9 +842,10 @@ eliminate_tail_call (struct tailcall *t)
 
   /* Number of executions of function has reduced by the tailcall.  */
   e = single_succ_edge (gsi_bb (t->call_gsi));
-  decrease_profile (EXIT_BLOCK_PTR, e->count, EDGE_FREQUENCY (e));
-  decrease_profile (ENTRY_BLOCK_PTR, e->count, EDGE_FREQUENCY (e));
-  if (e->dest != EXIT_BLOCK_PTR)
+  decrease_profile (EXIT_BLOCK_PTR_FOR_FN (cfun), e->count, EDGE_FREQUENCY (e));
+  decrease_profile (ENTRY_BLOCK_PTR_FOR_FN (cfun), e->count,
+		    EDGE_FREQUENCY (e));
+  if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
     decrease_profile (e->dest, e->count, EDGE_FREQUENCY (e));
 
   /* Replace the call by a jump to the start of function.  */
@@ -948,7 +949,7 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
   bool phis_constructed = false;
   struct tailcall *tailcalls = NULL, *act, *next;
   bool changed = false;
-  basic_block first = single_succ (ENTRY_BLOCK_PTR);
+  basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   tree param;
   gimple stmt;
   edge_iterator ei;
@@ -958,7 +959,7 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
   if (opt_tailcalls)
     opt_tailcalls = suitable_for_tail_call_opt_p ();
 
-  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+  FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
     {
       /* Only traverse the normal exits, i.e. those that end with return
 	 statement.  */
@@ -982,7 +983,8 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
 	     or if there are existing degenerate PHI nodes.  */
 	  if (!single_pred_p (first)
 	      || !gimple_seq_empty_p (phi_nodes (first)))
-	    first = split_edge (single_succ_edge (ENTRY_BLOCK_PTR));
+	    first
+	      = split_edge (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)));
 
 	  /* Copy the args if needed.  */
 	  for (param = DECL_ARGUMENTS (current_function_decl);
@@ -1029,7 +1031,7 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
   if (a_acc || m_acc)
     {
       /* Modify the remaining return statements.  */
-      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR->preds)
+      FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
 	{
 	  stmt = last_stmt (e->src);
 
diff --git a/gcc/tsan.c b/gcc/tsan.c
index 42730f0..9330074 100644
--- a/gcc/tsan.c
+++ b/gcc/tsan.c
@@ -652,7 +652,7 @@ instrument_func_entry (void)
   tree ret_addr, builtin_decl;
   gimple g;
 
-  succ_bb = single_succ (ENTRY_BLOCK_PTR);
+  succ_bb = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   gsi = gsi_after_labels (succ_bb);
 
   builtin_decl = builtin_decl_implicit (BUILT_IN_RETURN_ADDRESS);
@@ -682,7 +682,7 @@ instrument_func_exit (void)
   edge_iterator ei;
 
   /* Find all function exits.  */
-  exit_bb = EXIT_BLOCK_PTR;
+  exit_bb = EXIT_BLOCK_PTR_FOR_FN (cfun);
   FOR_EACH_EDGE (e, ei, exit_bb->preds)
     {
       gsi = gsi_last_bb (e->src);
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index cfda63a..591747b 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -836,16 +836,18 @@ vt_stack_adjustments (void)
   int sp;
 
   /* Initialize entry block.  */
-  VTI (ENTRY_BLOCK_PTR)->visited = true;
-  VTI (ENTRY_BLOCK_PTR)->in.stack_adjust = INCOMING_FRAME_SP_OFFSET;
-  VTI (ENTRY_BLOCK_PTR)->out.stack_adjust = INCOMING_FRAME_SP_OFFSET;
+  VTI (ENTRY_BLOCK_PTR_FOR_FN (cfun))->visited = true;
+  VTI (ENTRY_BLOCK_PTR_FOR_FN (cfun))->in.stack_adjust
+    = INCOMING_FRAME_SP_OFFSET;
+  VTI (ENTRY_BLOCK_PTR_FOR_FN (cfun))->out.stack_adjust
+    = INCOMING_FRAME_SP_OFFSET;
 
   /* Allocate stack for back-tracking up CFG.  */
   stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Push the first edge on to the stack.  */
-  stack[sp++] = ei_start (ENTRY_BLOCK_PTR->succs);
+  stack[sp++] = ei_start (ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs);
 
   while (sp)
     {
@@ -866,7 +868,7 @@ vt_stack_adjustments (void)
 	  VTI (dest)->visited = true;
 	  VTI (dest)->in.stack_adjust = offset = VTI (src)->out.stack_adjust;
 
-	  if (dest != EXIT_BLOCK_PTR)
+	  if (dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    for (insn = BB_HEAD (dest);
 		 insn != NEXT_INSN (BB_END (dest));
 		 insn = NEXT_INSN (insn))
@@ -7035,7 +7037,7 @@ vt_find_locations (void)
 		{
 		  FOR_EACH_EDGE (e, ei, bb->succs)
 		    {
-		      if (e->dest == EXIT_BLOCK_PTR)
+		      if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
 			continue;
 
 		      if (bitmap_bit_p (visited, e->dest->index))
@@ -9584,7 +9586,7 @@ vt_add_function_parameter (tree parm)
   if (!track_loc_p (incoming, parm, offset, false, &mode, &offset))
     return;
 
-  out = &VTI (ENTRY_BLOCK_PTR)->out;
+  out = &VTI (ENTRY_BLOCK_PTR_FOR_FN (cfun))->out;
 
   dv = dv_from_decl (parm);
 
@@ -9931,7 +9933,7 @@ vt_initialize (void)
       for (;;)
 	{
 	  edge e;
-	  if (bb->next_bb == EXIT_BLOCK_PTR
+	  if (bb->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
 	      || ! single_pred_p (bb->next_bb))
 	    break;
 	  e = find_edge (bb, bb->next_bb);
@@ -10034,7 +10036,7 @@ vt_initialize (void)
     }
 
   hard_frame_pointer_adjustment = -1;
-  VTI (ENTRY_BLOCK_PTR)->flooded = true;
+  VTI (ENTRY_BLOCK_PTR_FOR_FN (cfun))->flooded = true;
   cfa_base_rtx = NULL_RTX;
   return true;
 }
diff --git a/gcc/varasm.c b/gcc/varasm.c
index 0f94465..3ca4700 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -1639,7 +1639,7 @@ assemble_start_function (tree decl, const char *fnname)
 	 align the hot section and write out the hot section label.
 	 But if the current function is a thunk, we do not have a CFG.  */
       if (!cfun->is_thunk
-	  && BB_PARTITION (ENTRY_BLOCK_PTR->next_bb) == BB_COLD_PARTITION)
+	  && BB_PARTITION (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb) == BB_COLD_PARTITION)
 	{
 	  switch_to_section (text_section);
 	  assemble_align (DECL_ALIGN (decl));

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 01/13] Rename macros (basic_block_info_for_function, BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION)
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
@ 2013-12-06 14:52                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 04/13] Rename profile_status_for_function to profile_status_for_fn David Malcolm
                                       ` (12 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:52 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (basic_block_info_for_function): Rename to...
	(basic_block_info_for_fn): ...this.
	(BASIC_BLOCK_FOR_FUNCTION): Rename to...
	(BASIC_BLOCK_FOR_FN): ...this.
	(SET_BASIC_BLOCK_FOR_FUNCTION): Rename to...
	(SET_BASIC_BLOCK_FOR_FN): ...this.

	* gimple-streamer-in.c (input_phi, input_bb): Update for renaming
	of BASIC_BLOCK_FOR_FUNCTION to BASIC_BLOCK_FOR_FN.
	* ipa-utils.c (ipa_merge_profiles): Likewise.
	* lto-streamer-in.c (make_new_block): Update for renaming of
	SET_BASIC_BLOCK_FOR_FUNCTION to SET_BASIC_BLOCK_FOR_FN.
	(input_cfg): Update for renamings.
	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
	(dump_function_to_file): Update for renaming of
	basic_block_info_for_function to basic_block_info_for_fn.
---
 gcc/basic-block.h        | 10 +++++-----
 gcc/gimple-streamer-in.c |  4 ++--
 gcc/ipa-utils.c          |  4 ++--
 gcc/lto-streamer-in.c    | 14 +++++++-------
 gcc/tree-cfg.c           | 12 +++++-------
 5 files changed, 21 insertions(+), 23 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 58bacc3..234f6e9 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -314,17 +314,17 @@ struct GTY(()) control_flow_graph {
 /* Defines for accessing the fields of the CFG structure for function FN.  */
 #define ENTRY_BLOCK_PTR_FOR_FN(FN)	     ((FN)->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR_FOR_FN(FN)	     ((FN)->cfg->x_exit_block_ptr)
-#define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
+#define basic_block_info_for_fn(FN)	     ((FN)->cfg->x_basic_block_info)
 #define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
 #define profile_status_for_function(FN)	     ((FN)->cfg->x_profile_status)
 
-#define BASIC_BLOCK_FOR_FUNCTION(FN,N) \
-  ((*basic_block_info_for_function (FN))[(N)])
-#define SET_BASIC_BLOCK_FOR_FUNCTION(FN,N,BB) \
-  ((*basic_block_info_for_function (FN))[(N)] = (BB))
+#define BASIC_BLOCK_FOR_FN(FN,N) \
+  ((*basic_block_info_for_fn (FN))[(N)])
+#define SET_BASIC_BLOCK_FOR_FN(FN,N,BB) \
+  ((*basic_block_info_for_fn (FN))[(N)] = (BB))
 
 /* Defines for textual backward source compatibility.  */
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
diff --git a/gcc/gimple-streamer-in.c b/gcc/gimple-streamer-in.c
index 57b0d87..bc85ae9 100644
--- a/gcc/gimple-streamer-in.c
+++ b/gcc/gimple-streamer-in.c
@@ -67,7 +67,7 @@ input_phi (struct lto_input_block *ib, basic_block bb, struct data_in *data_in,
       int src_index = streamer_read_uhwi (ib);
       bitpack_d bp = streamer_read_bitpack (ib);
       location_t arg_loc = stream_input_location (&bp, data_in);
-      basic_block sbb = BASIC_BLOCK_FOR_FUNCTION (fn, src_index);
+      basic_block sbb = BASIC_BLOCK_FOR_FN (fn, src_index);
 
       edge e = NULL;
       int j;
@@ -258,7 +258,7 @@ input_bb (struct lto_input_block *ib, enum LTO_tags tag,
   gcc_assert (cfun == fn);
 
   index = streamer_read_uhwi (ib);
-  bb = BASIC_BLOCK_FOR_FUNCTION (fn, index);
+  bb = BASIC_BLOCK_FOR_FN (fn, index);
 
   bb->count = apply_scale (streamer_read_gcov_count (ib),
                            count_materialization_scale);
diff --git a/gcc/ipa-utils.c b/gcc/ipa-utils.c
index 312d75d..0253bb0 100644
--- a/gcc/ipa-utils.c
+++ b/gcc/ipa-utils.c
@@ -727,7 +727,7 @@ ipa_merge_profiles (struct cgraph_node *dst,
 	{
 	  unsigned int i;
 
-	  dstbb = BASIC_BLOCK_FOR_FUNCTION (dstcfun, srcbb->index);
+	  dstbb = BASIC_BLOCK_FOR_FN (dstcfun, srcbb->index);
 	  if (dstbb == NULL)
 	    {
 	      if (cgraph_dump_file)
@@ -772,7 +772,7 @@ ipa_merge_profiles (struct cgraph_node *dst,
 	{
 	  unsigned int i;
 
-	  dstbb = BASIC_BLOCK_FOR_FUNCTION (dstcfun, srcbb->index);
+	  dstbb = BASIC_BLOCK_FOR_FN (dstcfun, srcbb->index);
 	  dstbb->count += srcbb->count;
 	  for (i = 0; i < EDGE_COUNT (srcbb->succs); i++)
 	    {
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 862e49d..5a604d3 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -611,7 +611,7 @@ make_new_block (struct function *fn, unsigned int index)
 {
   basic_block bb = alloc_block ();
   bb->index = index;
-  SET_BASIC_BLOCK_FOR_FUNCTION (fn, index, bb);
+  SET_BASIC_BLOCK_FOR_FN (fn, index, bb);
   n_basic_blocks_for_fn (fn)++;
   return bb;
 }
@@ -638,8 +638,8 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
   bb_count = streamer_read_uhwi (ib);
 
   last_basic_block_for_function (fn) = bb_count;
-  if (bb_count > basic_block_info_for_function (fn)->length ())
-    vec_safe_grow_cleared (basic_block_info_for_function (fn), bb_count);
+  if (bb_count > basic_block_info_for_fn (fn)->length ())
+    vec_safe_grow_cleared (basic_block_info_for_fn (fn), bb_count);
 
   if (bb_count > label_to_block_map_for_function (fn)->length ())
     vec_safe_grow_cleared (label_to_block_map_for_function (fn), bb_count);
@@ -647,7 +647,7 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
   index = streamer_read_hwi (ib);
   while (index != -1)
     {
-      basic_block bb = BASIC_BLOCK_FOR_FUNCTION (fn, index);
+      basic_block bb = BASIC_BLOCK_FOR_FN (fn, index);
       unsigned int edge_count;
 
       if (bb == NULL)
@@ -671,7 +671,7 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
                                count_materialization_scale);
 	  edge_flags = streamer_read_uhwi (ib);
 
-	  dest = BASIC_BLOCK_FOR_FUNCTION (fn, dest_index);
+	  dest = BASIC_BLOCK_FOR_FN (fn, dest_index);
 
 	  if (dest == NULL)
 	    dest = make_new_block (fn, dest_index);
@@ -688,7 +688,7 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
   index = streamer_read_hwi (ib);
   while (index != -1)
     {
-      basic_block bb = BASIC_BLOCK_FOR_FUNCTION (fn, index);
+      basic_block bb = BASIC_BLOCK_FOR_FN (fn, index);
       bb->prev_bb = p_bb;
       p_bb->next_bb = bb;
       p_bb = bb;
@@ -719,7 +719,7 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
 	}
 
       struct loop *loop = alloc_loop ();
-      loop->header = BASIC_BLOCK_FOR_FUNCTION (fn, header_index);
+      loop->header = BASIC_BLOCK_FOR_FN (fn, header_index);
       loop->header->loop_father = loop;
 
       /* Read everything copy_loop_info copies.  */
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 6d1ebe6..e4a1371 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -185,8 +185,8 @@ init_empty_tree_cfg_for_function (struct function *fn)
   profile_status_for_function (fn) = PROFILE_ABSENT;
   n_basic_blocks_for_fn (fn) = NUM_FIXED_BLOCKS;
   last_basic_block_for_function (fn) = NUM_FIXED_BLOCKS;
-  vec_alloc (basic_block_info_for_function (fn), initial_cfg_capacity);
-  vec_safe_grow_cleared (basic_block_info_for_function (fn),
+  vec_alloc (basic_block_info_for_fn (fn), initial_cfg_capacity);
+  vec_safe_grow_cleared (basic_block_info_for_fn (fn),
 			 initial_cfg_capacity);
 
   /* Build a mapping of labels to their associated blocks.  */
@@ -194,10 +194,8 @@ init_empty_tree_cfg_for_function (struct function *fn)
   vec_safe_grow_cleared (label_to_block_map_for_function (fn),
 			 initial_cfg_capacity);
 
-  SET_BASIC_BLOCK_FOR_FUNCTION (fn, ENTRY_BLOCK,
-				ENTRY_BLOCK_PTR_FOR_FN (fn));
-  SET_BASIC_BLOCK_FOR_FUNCTION (fn, EXIT_BLOCK,
-		   EXIT_BLOCK_PTR_FOR_FN (fn));
+  SET_BASIC_BLOCK_FOR_FN (fn, ENTRY_BLOCK, ENTRY_BLOCK_PTR_FOR_FN (fn));
+  SET_BASIC_BLOCK_FOR_FN (fn, EXIT_BLOCK, EXIT_BLOCK_PTR_FOR_FN (fn));
 
   ENTRY_BLOCK_PTR_FOR_FN (fn)->next_bb
     = EXIT_BLOCK_PTR_FOR_FN (fn);
@@ -7046,7 +7044,7 @@ dump_function_to_file (tree fndecl, FILE *file, int flags)
 
   if (fun && fun->decl == fndecl
       && fun->cfg
-      && basic_block_info_for_function (fun))
+      && basic_block_info_for_fn (fun))
     {
       /* If the CFG has been built, emit a CFG-based dump.  */
       if (!ignore_topmost_bind)
-- 
1.7.11.7


* [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h
  2013-11-20  1:12                 ` Committed: removal of ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros David Malcolm
@ 2013-12-06 14:52                   ` David Malcolm
  2013-12-06 14:52                     ` [PATCH 01/13] Rename macros (basic_block_info_for_function, BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION) David Malcolm
                                       ` (13 more replies)
  0 siblings, 14 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:52 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

I have a series of 13 follow-up patches which remove the remaining
"cfun"-using macros from basic-block.h.

Successfully bootstrapped&regtested on x86_64-unknown-linux-gnu.

These were pre-approved in stage1, and are mechanical in nature [1].

I'd like to apply these to trunk now, but given that we're now in
stage3, do I need to wait until the next stage1?

The first 4 patches rename various "_for_function|_FOR_FUNCTION"
macros to "_for_fn|_FOR_FN" for consistency with the earlier
patches in this thread.

The remaining patches eliminate cfun-using macros in favor of
the "_for_fn|_FOR_FN" variant, making uses of cfun explicit.
There are still some macros in function.h that implicitly use
cfun, but it's less clear what to replace them with.

Note to self: here's a grep invocation for ensuring that no new
uses sneak into the sources:
for m in \
  basic_block_info_for_function BASIC_BLOCK_FOR_FUNCTION \
  SET_BASIC_BLOCK_FOR_FUNCTION last_basic_block_for_function \
  label_to_block_map_for_function profile_status_for_function \
  SET_BASIC_BLOCK BASIC_BLOCK basic_block_info label_to_block_map \
  profile_status last_basic_block FOR_EACH_BB FOR_EACH_BB_REVERSE \
  FOR_ALL_BB ; 
do
  grep -nH -E -w $m \
     gcc/*.[ch] gcc/config/*.[ch] gcc/config/*/*.{c,h,md} ; 
done

(this currently has 11 false-positives)
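The per-macro loop above can also be collapsed into a single grep invocation with an alternation; a sketch, demonstrated here on a throwaway file rather than a real GCC tree:

```shell
# Same check as the loop above, as one grep over an alternation.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
  bb = BASIC_BLOCK (i);             /* old, implicit-cfun macro */
  bb = BASIC_BLOCK_FOR_FN (fn, i);  /* new, explicit form */
EOF
macros='SET_BASIC_BLOCK|BASIC_BLOCK|basic_block_info|label_to_block_map|profile_status|last_basic_block|FOR_EACH_BB|FOR_EACH_BB_REVERSE|FOR_ALL_BB'
# -w keeps BASIC_BLOCK from matching inside BASIC_BLOCK_FOR_FN,
# since '_' is a word constituent.
matches=$(grep -n -E -w "($macros)" "$tmp")
echo "$matches"
rm -f "$tmp"
```

Only the first line of the sample matches; the _FOR_FN call site is left alone.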

[1] with one exception, in patch 10 in gcc/ira-emit.c (ira_emit) where
I introduced a new local to avoid overlong lines.

David Malcolm (13):
  Rename macros (basic_block_info_for_function,
    BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION)
  Rename last_basic_block_for_function to last_basic_block_for_fn.
  Rename label_to_block_map_for_function to label_to_block_map_for_fn.
  Rename profile_status_for_function to profile_status_for_fn.
  Eliminate SET_BASIC_BLOCK macro.
  Eliminate BASIC_BLOCK macro.
  Eliminate basic_block_info macro.
  Eliminate label_to_block_map macro.
  Eliminate profile_status macro.
  Eliminate last_basic_block macro.
  Eliminate FOR_EACH_BB macro.
  Eliminate FOR_EACH_BB_REVERSE macro.
  Eliminate FOR_ALL_BB macro.

 gcc/alias.c                              |   2 +-
 gcc/asan.c                               |   6 +-
 gcc/auto-inc-dec.c                       |   2 +-
 gcc/basic-block.h                        |  32 +++------
 gcc/bb-reorder.c                         |  29 ++++----
 gcc/bt-load.c                            |  45 ++++++------
 gcc/caller-save.c                        |   8 +--
 gcc/cfg.c                                |  32 ++++-----
 gcc/cfganal.c                            |  35 +++++-----
 gcc/cfgbuild.c                           |  12 ++--
 gcc/cfgcleanup.c                         |   6 +-
 gcc/cfgexpand.c                          |  14 ++--
 gcc/cfghooks.c                           |  16 ++---
 gcc/cfgloop.c                            |  20 +++---
 gcc/cfgloopanal.c                        |   8 +--
 gcc/cfgloopmanip.c                       |   6 +-
 gcc/cfgrtl.c                             |  61 ++++++++--------
 gcc/cgraphbuild.c                        |   8 +--
 gcc/combine-stack-adj.c                  |   2 +-
 gcc/combine.c                            |   8 +--
 gcc/config/arm/arm.c                     |   4 +-
 gcc/config/bfin/bfin.c                   |   4 +-
 gcc/config/c6x/c6x.c                     |   6 +-
 gcc/config/epiphany/resolve-sw-modes.c   |   6 +-
 gcc/config/frv/frv.c                     |   8 +--
 gcc/config/i386/i386.c                   |   2 +-
 gcc/config/ia64/ia64.c                   |   6 +-
 gcc/config/mips/mips.c                   |   8 +--
 gcc/config/picochip/picochip.c           |   2 +-
 gcc/config/rs6000/rs6000.c               |   2 +-
 gcc/config/s390/s390.c                   |   4 +-
 gcc/config/sh/sh.c                       |   2 +-
 gcc/config/spu/spu.c                     |   6 +-
 gcc/config/tilegx/tilegx.c               |   4 +-
 gcc/config/tilepro/tilepro.c             |   4 +-
 gcc/coverage.c                           |   2 +-
 gcc/cprop.c                              |  23 ++++---
 gcc/cse.c                                |   8 +--
 gcc/dce.c                                |  10 +--
 gcc/df-core.c                            |  68 +++++++++---------
 gcc/df-problems.c                        |  54 +++++++--------
 gcc/df-scan.c                            |  42 ++++++-----
 gcc/df.h                                 |   2 +-
 gcc/dominance.c                          |  37 +++++-----
 gcc/domwalk.c                            |   2 +-
 gcc/dse.c                                |  14 ++--
 gcc/except.c                             |   2 +-
 gcc/final.c                              |   6 +-
 gcc/function.c                           |  16 ++---
 gcc/gcse.c                               |  54 ++++++++-------
 gcc/gimple-iterator.c                    |   2 +-
 gcc/gimple-ssa-isolate-paths.c           |   4 +-
 gcc/gimple-streamer-in.c                 |   4 +-
 gcc/gimple.c                             |   8 ++-
 gcc/graph.c                              |   6 +-
 gcc/graphite-scop-detection.c            |   6 +-
 gcc/graphite-sese-to-poly.c              |   6 +-
 gcc/graphite.c                           |   6 +-
 gcc/haifa-sched.c                        |   4 +-
 gcc/hw-doloop.c                          |   6 +-
 gcc/ifcvt.c                              |   2 +-
 gcc/init-regs.c                          |   2 +-
 gcc/internal-fn.c                        |   6 +-
 gcc/ipa-inline-analysis.c                |   4 +-
 gcc/ipa-prop.c                           |   2 +-
 gcc/ipa-pure-const.c                     |   2 +-
 gcc/ipa-split.c                          |  13 ++--
 gcc/ipa-utils.c                          |   8 +--
 gcc/ira-build.c                          |  15 ++--
 gcc/ira-costs.c                          |   2 +-
 gcc/ira-emit.c                           |  24 ++++---
 gcc/ira.c                                |  42 ++++++-----
 gcc/jump.c                               |   2 +-
 gcc/lcm.c                                | 115 ++++++++++++++++++-------------
 gcc/loop-init.c                          |   6 +-
 gcc/loop-invariant.c                     |   2 +-
 gcc/loop-unroll.c                        |  16 +++--
 gcc/lower-subreg.c                       |   8 +--
 gcc/lra-assigns.c                        |   2 +-
 gcc/lra-coalesce.c                       |   4 +-
 gcc/lra-constraints.c                    |   4 +-
 gcc/lra-eliminations.c                   |   2 +-
 gcc/lra-lives.c                          |   4 +-
 gcc/lra-spills.c                         |   6 +-
 gcc/lra.c                                |  10 +--
 gcc/lto-streamer-in.c                    |  28 ++++----
 gcc/lto-streamer-out.c                   |   8 +--
 gcc/mcf.c                                |   4 +-
 gcc/mode-switching.c                     |  27 ++++----
 gcc/modulo-sched.c                       |   2 +-
 gcc/omp-low.c                            |   6 +-
 gcc/optabs.c                             |   2 +-
 gcc/postreload-gcse.c                    |   4 +-
 gcc/postreload.c                         |   4 +-
 gcc/predict.c                            |  54 +++++++--------
 gcc/profile.c                            |  12 ++--
 gcc/recog.c                              |   6 +-
 gcc/ree.c                                |   2 +-
 gcc/reg-stack.c                          |   6 +-
 gcc/regcprop.c                           |   8 +--
 gcc/reginfo.c                            |   2 +-
 gcc/regrename.c                          |  12 ++--
 gcc/regstat.c                            |   8 +--
 gcc/reload1.c                            |  10 +--
 gcc/resource.c                           |  13 ++--
 gcc/sched-ebb.c                          |   4 +-
 gcc/sched-int.h                          |   5 +-
 gcc/sched-rgn.c                          | 103 +++++++++++++++------------
 gcc/sched-vis.c                          |   2 +-
 gcc/sel-sched-dump.c                     |   2 +-
 gcc/sel-sched-ir.c                       |  35 +++++-----
 gcc/sel-sched.c                          |  22 +++---
 gcc/sese.c                               |   6 +-
 gcc/stack-ptr-mod.c                      |   2 +-
 gcc/store-motion.c                       |  38 +++++-----
 gcc/testsuite/g++.dg/plugin/selfassign.c |   2 +-
 gcc/testsuite/gcc.dg/plugin/selfassign.c |   2 +-
 gcc/tracer.c                             |   8 +--
 gcc/trans-mem.c                          |  15 ++--
 gcc/tree-call-cdce.c                     |   2 +-
 gcc/tree-cfg.c                           | 108 +++++++++++++++--------------
 gcc/tree-cfgcleanup.c                    |  16 ++---
 gcc/tree-complex.c                       |   6 +-
 gcc/tree-dfa.c                           |   6 +-
 gcc/tree-eh.c                            |   6 +-
 gcc/tree-emutls.c                        |   2 +-
 gcc/tree-if-conv.c                       |   2 +-
 gcc/tree-inline.c                        |  32 +++++----
 gcc/tree-into-ssa.c                      |  45 ++++++------
 gcc/tree-loop-distribution.c             |   2 +-
 gcc/tree-nrv.c                           |   6 +-
 gcc/tree-object-size.c                   |   2 +-
 gcc/tree-outof-ssa.c                     |   6 +-
 gcc/tree-profile.c                       |   2 +-
 gcc/tree-scalar-evolution.c              |   2 +-
 gcc/tree-sra.c                           |  14 ++--
 gcc/tree-ssa-ccp.c                       |   6 +-
 gcc/tree-ssa-coalesce.c                  |   6 +-
 gcc/tree-ssa-copy.c                      |   2 +-
 gcc/tree-ssa-copyrename.c                |   4 +-
 gcc/tree-ssa-dce.c                       |  13 ++--
 gcc/tree-ssa-dom.c                       |   8 +--
 gcc/tree-ssa-forwprop.c                  |   2 +-
 gcc/tree-ssa-live.c                      |  32 ++++-----
 gcc/tree-ssa-loop-im.c                   |   8 +--
 gcc/tree-ssa-loop-manip.c                |  24 +++----
 gcc/tree-ssa-math-opts.c                 |  10 +--
 gcc/tree-ssa-pre.c                       |  16 ++---
 gcc/tree-ssa-propagate.c                 |   8 +--
 gcc/tree-ssa-reassoc.c                   |   8 ++-
 gcc/tree-ssa-sccvn.c                     |   2 +-
 gcc/tree-ssa-sink.c                      |   4 +-
 gcc/tree-ssa-structalias.c               |   4 +-
 gcc/tree-ssa-tail-merge.c                |  32 ++++-----
 gcc/tree-ssa-ter.c                       |   2 +-
 gcc/tree-ssa-threadupdate.c              |  10 +--
 gcc/tree-ssa-uncprop.c                   |   9 +--
 gcc/tree-ssa-uninit.c                    |   4 +-
 gcc/tree-ssa.c                           |   6 +-
 gcc/tree-stdarg.c                        |   8 +--
 gcc/tree-switch-conversion.c             |   2 +-
 gcc/tree-vect-generic.c                  |   2 +-
 gcc/tree-vectorizer.c                    |   6 +-
 gcc/tree-vrp.c                           |  20 +++---
 gcc/tsan.c                               |   2 +-
 gcc/ubsan.c                              |   2 +-
 gcc/value-prof.c                         |   6 +-
 gcc/var-tracking.c                       |  28 ++++----
 gcc/vtable-verify.c                      |   2 +-
 gcc/web.c                                |   6 +-
 170 files changed, 1112 insertions(+), 1030 deletions(-)

-- 
1.7.11.7


* [PATCH 04/13] Rename profile_status_for_function to profile_status_for_fn.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
  2013-12-06 14:52                     ` [PATCH 01/13] Rename macros (basic_block_info_for_function, BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION) David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 03/13] Rename label_to_block_map_for_function to label_to_block_map_for_fn David Malcolm
                                       ` (11 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (profile_status_for_function): Rename to...
	(profile_status_for_fn): ...this.

	* cfg.c (check_bb_profile): Update for renaming.
	* cgraphbuild.c (compute_call_stmt_bb_frequency): Likewise.
	* lto-streamer-in.c (input_cfg): Likewise.
	* lto-streamer-out.c (output_cfg):  Likewise.
	* predict.c (maybe_hot_frequency_p, maybe_hot_count_p,
	maybe_hot_bb_p, probably_never_executed)
	(handle_missing_profiles): Likewise.
	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
	* tree-inline.c (copy_bb, initialize_cfun): Likewise.
---
 gcc/basic-block.h      |  2 +-
 gcc/cfg.c              |  2 +-
 gcc/cgraphbuild.c      |  2 +-
 gcc/lto-streamer-in.c  |  4 ++--
 gcc/lto-streamer-out.c |  2 +-
 gcc/predict.c          | 12 ++++++------
 gcc/tree-cfg.c         |  2 +-
 gcc/tree-inline.c      |  4 ++--
 8 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 1471972..da93c6f 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -319,7 +319,7 @@ struct GTY(()) control_flow_graph {
 #define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_fn(FN)	     ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_fn(FN)	     ((FN)->cfg->x_label_to_block_map)
-#define profile_status_for_function(FN)	     ((FN)->cfg->x_profile_status)
+#define profile_status_for_fn(FN)	     ((FN)->cfg->x_profile_status)
 
 #define BASIC_BLOCK_FOR_FN(FN,N) \
   ((*basic_block_info_for_fn (FN))[(N)])
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 6bceca5..786fe48 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -408,7 +408,7 @@ check_bb_profile (basic_block bb, FILE * file, int indent, int flags)
   memset ((void *) s_indent, ' ', (size_t) indent);
   s_indent[indent] = '\0';
 
-  if (profile_status_for_function (fun) == PROFILE_ABSENT)
+  if (profile_status_for_fn (fun) == PROFILE_ABSENT)
     return;
 
   if (bb != EXIT_BLOCK_PTR_FOR_FN (fun))
diff --git a/gcc/cgraphbuild.c b/gcc/cgraphbuild.c
index 9a63982..6c6698b 100644
--- a/gcc/cgraphbuild.c
+++ b/gcc/cgraphbuild.c
@@ -208,7 +208,7 @@ compute_call_stmt_bb_frequency (tree decl, basic_block bb)
   		     (DECL_STRUCT_FUNCTION (decl))->frequency;
   int freq = bb->frequency;
 
-  if (profile_status_for_function (DECL_STRUCT_FUNCTION (decl)) == PROFILE_ABSENT)
+  if (profile_status_for_fn (DECL_STRUCT_FUNCTION (decl)) == PROFILE_ABSENT)
     return CGRAPH_FREQ_BASE;
 
   if (!entry_freq)
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 91fb12d..8dc94bd 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -632,8 +632,8 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
   init_empty_tree_cfg_for_function (fn);
   init_ssa_operands (fn);
 
-  profile_status_for_function (fn) = streamer_read_enum (ib, profile_status_d,
-							 PROFILE_LAST);
+  profile_status_for_fn (fn) = streamer_read_enum (ib, profile_status_d,
+						   PROFILE_LAST);
 
   bb_count = streamer_read_uhwi (ib);
 
diff --git a/gcc/lto-streamer-out.c b/gcc/lto-streamer-out.c
index 858d49e..615cc84 100644
--- a/gcc/lto-streamer-out.c
+++ b/gcc/lto-streamer-out.c
@@ -1630,7 +1630,7 @@ output_cfg (struct output_block *ob, struct function *fn)
   ob->main_stream = ob->cfg_stream;
 
   streamer_write_enum (ob->main_stream, profile_status_d, PROFILE_LAST,
-		       profile_status_for_function (fn));
+		       profile_status_for_fn (fn));
 
   /* Output the number of the highest basic block.  */
   streamer_write_uhwi (ob, last_basic_block_for_fn (fn));
diff --git a/gcc/predict.c b/gcc/predict.c
index 1cd3fa6..e959a3b 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -121,7 +121,7 @@ maybe_hot_frequency_p (struct function *fun, int freq)
       if (node->frequency == NODE_FREQUENCY_HOT)
         return true;
     }
-  if (profile_status_for_function (fun) == PROFILE_ABSENT)
+  if (profile_status_for_fn (fun) == PROFILE_ABSENT)
     return true;
   if (node->frequency == NODE_FREQUENCY_EXECUTED_ONCE
       && freq < (ENTRY_BLOCK_PTR_FOR_FN (fun)->frequency * 2 / 3))
@@ -164,7 +164,7 @@ set_hot_bb_threshold (gcov_type min)
 static inline bool
 maybe_hot_count_p (struct function *fun, gcov_type count)
 {
-  if (fun && profile_status_for_function (fun) != PROFILE_READ)
+  if (fun && profile_status_for_fn (fun) != PROFILE_READ)
     return true;
   /* Code executed at most once is not hot.  */
   if (profile_info->runs >= count)
@@ -179,7 +179,7 @@ bool
 maybe_hot_bb_p (struct function *fun, const_basic_block bb)
 {
   gcc_checking_assert (fun);
-  if (profile_status_for_function (fun) == PROFILE_READ)
+  if (profile_status_for_fn (fun) == PROFILE_READ)
     return maybe_hot_count_p (fun, bb->count);
   return maybe_hot_frequency_p (fun, bb->frequency);
 }
@@ -239,7 +239,7 @@ probably_never_executed (struct function *fun,
                          gcov_type count, int frequency)
 {
   gcc_checking_assert (fun);
-  if (profile_status_for_function (fun) == PROFILE_READ)
+  if (profile_status_for_fn (fun) == PROFILE_READ)
     {
       int unlikely_count_fraction = PARAM_VALUE (UNLIKELY_BB_COUNT_FRACTION);
       if (count * unlikely_count_fraction >= profile_info->runs)
@@ -2806,7 +2806,7 @@ drop_profile (struct cgraph_node *node, gcov_type call_count)
                  node->name (), node->order);
     }
 
-  profile_status_for_function (fn)
+  profile_status_for_fn (fn)
       = (flag_guess_branch_prob ? PROFILE_GUESSED : PROFILE_ABSENT);
   node->frequency
       = hot ? NODE_FREQUENCY_HOT : NODE_FREQUENCY_NORMAL;
@@ -2869,7 +2869,7 @@ handle_missing_profiles (void)
           if (callee->count > 0)
             continue;
           if (DECL_COMDAT (callee->decl) && fn && fn->cfg
-              && profile_status_for_function (fn) == PROFILE_READ)
+              && profile_status_for_fn (fn) == PROFILE_READ)
             {
               drop_profile (node, 0);
               worklist.safe_push (callee);
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 998ee26..6c2cc16 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -182,7 +182,7 @@ init_empty_tree_cfg_for_function (struct function *fn)
 {
   /* Initialize the basic block array.  */
   init_flow (fn);
-  profile_status_for_function (fn) = PROFILE_ABSENT;
+  profile_status_for_fn (fn) = PROFILE_ABSENT;
   n_basic_blocks_for_fn (fn) = NUM_FIXED_BLOCKS;
   last_basic_block_for_fn (fn) = NUM_FIXED_BLOCKS;
   vec_alloc (basic_block_info_for_fn (fn), initial_cfg_capacity);
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index f42ade02..abc216d 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -1792,7 +1792,7 @@ copy_bb (copy_body_data *id, basic_block bb, int frequency_scale,
 			{
 			  edge->frequency = new_freq;
 			  if (dump_file
-			      && profile_status_for_function (cfun) != PROFILE_ABSENT
+			      && profile_status_for_fn (cfun) != PROFILE_ABSENT
 			      && (edge_freq > edge->frequency + 10
 				  || edge_freq < edge->frequency - 10))
 			    {
@@ -2208,7 +2208,7 @@ initialize_cfun (tree new_fndecl, tree callee_fndecl, gcov_type count)
 
   init_empty_tree_cfg ();
 
-  profile_status_for_function (cfun) = profile_status_for_function (src_cfun);
+  profile_status_for_fn (cfun) = profile_status_for_fn (src_cfun);
   ENTRY_BLOCK_PTR_FOR_FN (cfun)->count =
     (ENTRY_BLOCK_PTR_FOR_FN (src_cfun)->count * count_scale /
      REG_BR_PROB_BASE);
-- 
1.7.11.7


* [PATCH 03/13] Rename label_to_block_map_for_function to label_to_block_map_for_fn.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
  2013-12-06 14:52                     ` [PATCH 01/13] Rename macros (basic_block_info_for_function, BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION) David Malcolm
  2013-12-06 14:53                     ` [PATCH 04/13] Rename profile_status_for_function to profile_status_for_fn David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 07/13] Eliminate basic_block_info macro David Malcolm
                                       ` (10 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (label_to_block_map_for_function): Rename to...
	(label_to_block_map_for_fn): ...this.
	* lto-streamer-in.c (input_cfg): Update for renaming.
	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
---
 gcc/basic-block.h     | 2 +-
 gcc/lto-streamer-in.c | 4 ++--
 gcc/tree-cfg.c        | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 88b0e48..1471972 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -318,7 +318,7 @@ struct GTY(()) control_flow_graph {
 #define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_fn(FN)	     ((FN)->cfg->x_last_basic_block)
-#define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
+#define label_to_block_map_for_fn(FN)	     ((FN)->cfg->x_label_to_block_map)
 #define profile_status_for_function(FN)	     ((FN)->cfg->x_profile_status)
 
 #define BASIC_BLOCK_FOR_FN(FN,N) \
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 9ad4f5f..91fb12d 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -641,8 +641,8 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
   if (bb_count > basic_block_info_for_fn (fn)->length ())
     vec_safe_grow_cleared (basic_block_info_for_fn (fn), bb_count);
 
-  if (bb_count > label_to_block_map_for_function (fn)->length ())
-    vec_safe_grow_cleared (label_to_block_map_for_function (fn), bb_count);
+  if (bb_count > label_to_block_map_for_fn (fn)->length ())
+    vec_safe_grow_cleared (label_to_block_map_for_fn (fn), bb_count);
 
   index = streamer_read_hwi (ib);
   while (index != -1)
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 3df4cbe..998ee26 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -190,8 +190,8 @@ init_empty_tree_cfg_for_function (struct function *fn)
 			 initial_cfg_capacity);
 
   /* Build a mapping of labels to their associated blocks.  */
-  vec_alloc (label_to_block_map_for_function (fn), initial_cfg_capacity);
-  vec_safe_grow_cleared (label_to_block_map_for_function (fn),
+  vec_alloc (label_to_block_map_for_fn (fn), initial_cfg_capacity);
+  vec_safe_grow_cleared (label_to_block_map_for_fn (fn),
 			 initial_cfg_capacity);
 
   SET_BASIC_BLOCK_FOR_FN (fn, ENTRY_BLOCK, ENTRY_BLOCK_PTR_FOR_FN (fn));
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (5 preceding siblings ...)
  2013-12-06 14:53                     ` [PATCH 05/13] Eliminate SET_BASIC_BLOCK macro David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-07  7:14                       ` Oleg Endo
  2013-12-06 15:08                     ` [PATCH 11/13] Eliminate FOR_EACH_BB macro David Malcolm
                                       ` (6 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (FOR_EACH_BB_REVERSE): Eliminate macro.

	* cfghooks.c (verify_flow_info): Replace uses of FOR_EACH_BB_REVERSE
	with FOR_EACH_BB_REVERSE_FN, making uses of cfun explicit.
	* cfgrtl.c (print_rtl_with_bb, rtl_verify_edges,
	rtl_verify_bb_insns, rtl_verify_bb_pointers,
	rtl_verify_bb_insn_chain, rtl_verify_fallthru): Likewise.
	* config/ia64/ia64.c (emit_predicate_relation_info): Likewise.
	* config/sh/sh.c (sh_md_init_global): Likewise.
	* dce.c (reset_unmarked_insns_debug_uses, delete_unmarked_insns):
	Likewise.
	* dominance.c (calc_dfs_tree): Likewise.
	* final.c (final): Likewise.
	* function.c (thread_prologue_and_epilogue_insns): Likewise.
	* gcse.c (compute_code_hoist_vbeinout): Likewise.
	* ira.c (update_equiv_regs, build_insn_chain): Likewise.
	* lcm.c (compute_antinout_edge): Likewise.
	* mode-switching.c (optimize_mode_switching): Likewise.
	* postreload.c (reload_combine): Likewise.
	* recog.c (split_all_insns, peephole2_optimize): Likewise.
	* tree-ssa-live.c (live_worklist): Likewise.
---
 gcc/basic-block.h      |  2 --
 gcc/cfghooks.c         |  2 +-
 gcc/cfgrtl.c           | 12 ++++++------
 gcc/config/ia64/ia64.c |  4 ++--
 gcc/config/sh/sh.c     |  2 +-
 gcc/dce.c              |  4 ++--
 gcc/dominance.c        |  4 ++--
 gcc/final.c            |  2 +-
 gcc/function.c         |  2 +-
 gcc/gcse.c             |  2 +-
 gcc/ira.c              |  4 ++--
 gcc/lcm.c              |  2 +-
 gcc/mode-switching.c   |  4 ++--
 gcc/postreload.c       |  2 +-
 gcc/recog.c            |  4 ++--
 gcc/tree-ssa-live.c    |  2 +-
 16 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index b378a5b..75f16ac 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -336,8 +336,6 @@ struct GTY(()) control_flow_graph {
 #define FOR_EACH_BB_REVERSE_FN(BB, FN) \
   FOR_BB_BETWEEN (BB, (FN)->cfg->x_exit_block_ptr->prev_bb, (FN)->cfg->x_entry_block_ptr, prev_bb)
 
-#define FOR_EACH_BB_REVERSE(BB) FOR_EACH_BB_REVERSE_FN (BB, cfun)
-
 /* For iterating over insns in basic block.  */
 #define FOR_BB_INSNS(BB, INSN)			\
   for ((INSN) = BB_HEAD (BB);			\
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 2400965..78218b5 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -123,7 +123,7 @@ verify_flow_info (void)
     }
 
   /* Now check the basic blocks (boundaries etc.) */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       int n_fallthru = 0;
       edge e;
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index daadd9b..7734ac1 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -2153,7 +2153,7 @@ print_rtl_with_bb (FILE *outf, const_rtx rtx_first, int flags)
 
       if (flags & TDF_BLOCKS)
 	{
-	  FOR_EACH_BB_REVERSE (bb)
+	  FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	    {
 	      rtx x;
 
@@ -2408,7 +2408,7 @@ rtl_verify_edges (void)
   int err = 0;
   basic_block bb;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       int n_fallthru = 0, n_branch = 0, n_abnormal_call = 0, n_sibcall = 0;
       int n_eh = 0, n_abnormal = 0;
@@ -2586,7 +2586,7 @@ rtl_verify_bb_insns (void)
   int err = 0;
   basic_block bb;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       /* Now check the header of basic
 	 block.  It ought to contain optional CODE_LABEL followed
@@ -2649,7 +2649,7 @@ rtl_verify_bb_pointers (void)
   basic_block bb;
 
   /* Check the general integrity of the basic blocks.  */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       rtx insn;
 
@@ -2739,7 +2739,7 @@ rtl_verify_bb_insn_chain (void)
 
   bb_info = XCNEWVEC (basic_block, max_uid);
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       rtx head = BB_HEAD (bb);
       rtx end = BB_END (bb);
@@ -2821,7 +2821,7 @@ rtl_verify_fallthru (void)
   basic_block bb;
   int err = 0;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       edge e;
 
diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
index a837974..99bc094 100644
--- a/gcc/config/ia64/ia64.c
+++ b/gcc/config/ia64/ia64.c
@@ -9613,7 +9613,7 @@ emit_predicate_relation_info (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       int r;
       rtx head = BB_HEAD (bb);
@@ -9641,7 +9641,7 @@ emit_predicate_relation_info (void)
      relations around them.  Otherwise the assembler will assume the call
      returns, and complain about uses of call-clobbered predicates after
      the call.  */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       rtx insn = BB_HEAD (bb);
 
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index 3e907b2..26c8957 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -11110,7 +11110,7 @@ sh_md_init_global (FILE *dump ATTRIBUTE_UNUSED,
   regmode_weight[1] = (short *) xcalloc (old_max_uid, sizeof (short));
   r0_life_regions = 0;
 
-  FOR_EACH_BB_REVERSE (b)
+  FOR_EACH_BB_REVERSE_FN (b, cfun)
   {
     find_regmode_weight (b, SImode);
     find_regmode_weight (b, SFmode);
diff --git a/gcc/dce.c b/gcc/dce.c
index 3101102..843dfc6 100644
--- a/gcc/dce.c
+++ b/gcc/dce.c
@@ -511,7 +511,7 @@ reset_unmarked_insns_debug_uses (void)
   basic_block bb;
   rtx insn, next;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     FOR_BB_INSNS_REVERSE_SAFE (bb, insn, next)
       if (DEBUG_INSN_P (insn))
 	{
@@ -550,7 +550,7 @@ delete_unmarked_insns (void)
   rtx insn, next;
   bool must_clean = false;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     FOR_BB_INSNS_REVERSE_SAFE (bb, insn, next)
       if (NONDEBUG_INSN_P (insn))
 	{
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 521b224..69816c1 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -357,7 +357,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
       basic_block b;
       bool saw_unconnected = false;
 
-      FOR_EACH_BB_REVERSE (b)
+      FOR_EACH_BB_REVERSE_FN (b, cfun)
 	{
 	  if (EDGE_COUNT (b->succs) > 0)
 	    {
@@ -376,7 +376,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
 
       if (saw_unconnected)
 	{
-	  FOR_EACH_BB_REVERSE (b)
+	  FOR_EACH_BB_REVERSE_FN (b, cfun)
 	    {
 	      basic_block b2;
 	      if (di->dfs_order[b->index])
diff --git a/gcc/final.c b/gcc/final.c
index f475d27..5526974 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -1996,7 +1996,7 @@ final (rtx first, FILE *file, int optimize_p)
 
       /* There is no cfg for a thunk.  */
       if (!cfun->is_thunk)
-	FOR_EACH_BB_REVERSE (bb)
+	FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	  {
 	    start_to_bb[INSN_UID (BB_HEAD (bb))] = bb;
 	    end_to_bb[INSN_UID (BB_END (bb))] = bb;
diff --git a/gcc/function.c b/gcc/function.c
index e00f583..e2d0e23 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -6236,7 +6236,7 @@ thread_prologue_and_epilogue_insns (void)
 	    }
 	  /* Now duplicate the tails.  */
 	  if (!bitmap_empty_p (&bb_tail))
-	    FOR_EACH_BB_REVERSE (bb)
+	    FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	      {
 		basic_block copy_bb, tbb;
 		rtx insert_point;
diff --git a/gcc/gcse.c b/gcc/gcse.c
index a6874ab..fdf0a57 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2829,7 +2829,7 @@ compute_code_hoist_vbeinout (void)
 
       /* We scan the blocks in the reverse order to speed up
 	 the convergence.  */
-      FOR_EACH_BB_REVERSE (bb)
+      FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	{
 	  if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	    {
diff --git a/gcc/ira.c b/gcc/ira.c
index b4ae0ca..7403870 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -3772,7 +3772,7 @@ update_equiv_regs (void)
      within the same loop (or in an inner loop), then move the register
      initialization just before the use, so that they are in the same
      basic block.  */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       loop_depth = bb_loop_depth (bb);
       for (insn = BB_END (bb);
@@ -4127,7 +4127,7 @@ build_insn_chain (void)
   for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
     if (TEST_HARD_REG_BIT (eliminable_regset, i))
       bitmap_set_bit (elim_regset, i);
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       bitmap_iterator bi;
       rtx insn;
diff --git a/gcc/lcm.c b/gcc/lcm.c
index 0b528d9..b5d56e0 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -109,7 +109,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
 
   /* Put every block on the worklist; this is necessary because of the
      optimistic initialization of ANTIN above.  */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       *qin++ = bb;
       bb->aux = bb;
diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
index 4e31d68..4f68536 100644
--- a/gcc/mode-switching.c
+++ b/gcc/mode-switching.c
@@ -692,7 +692,7 @@ optimize_mode_switching (void)
 	      insert_insn_on_edge (mode_set, eg);
 	    }
 
-	  FOR_EACH_BB_REVERSE (bb)
+	  FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	    if (bitmap_bit_p (del[bb->index], j))
 	      {
 		make_preds_opaque (bb, j);
@@ -712,7 +712,7 @@ optimize_mode_switching (void)
     {
       int no_mode = num_modes[entity_map[j]];
 
-      FOR_EACH_BB_REVERSE (bb)
+      FOR_EACH_BB_REVERSE_FN (bb, cfun)
 	{
 	  struct seginfo *ptr, *next;
 	  for (ptr = bb_info[j][bb->index].seginfo; ptr; ptr = next)
diff --git a/gcc/postreload.c b/gcc/postreload.c
index bfa5a38..37bd9ff 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -1281,7 +1281,7 @@ reload_combine (void)
   label_live = XNEWVEC (HARD_REG_SET, n_labels);
   CLEAR_HARD_REG_SET (ever_live_at_start);
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       insn = BB_HEAD (bb);
       if (LABEL_P (insn))
diff --git a/gcc/recog.c b/gcc/recog.c
index c59aa0e..dbd9a8a 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -2902,7 +2902,7 @@ split_all_insns (void)
   bitmap_clear (blocks);
   changed = false;
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       rtx insn, next;
       bool finish = false;
@@ -3556,7 +3556,7 @@ peephole2_optimize (void)
   search_ofs = 0;
   live = BITMAP_ALLOC (&reg_obstack);
 
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       bool past_end = false;
       int pos;
diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
index da7198b..a37ef85 100644
--- a/gcc/tree-ssa-live.c
+++ b/gcc/tree-ssa-live.c
@@ -1050,7 +1050,7 @@ live_worklist (tree_live_info_p live)
 
   /* Visit all the blocks in reverse order and propagate live on entry values
      into the predecessors blocks.  */
-  FOR_EACH_BB_REVERSE (bb)
+  FOR_EACH_BB_REVERSE_FN (bb, cfun)
     loe_visit_block (live, bb, visited, tmp);
 
   /* Process any blocks which require further iteration.  */
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 07/13] Eliminate basic_block_info macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (2 preceding siblings ...)
  2013-12-06 14:53                     ` [PATCH 03/13] Rename label_to_block_map_for_function to label_to_block_map_for_fn David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 02/13] Rename last_basic_block_for_function to last_basic_block_for_fn David Malcolm
                                       ` (9 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (basic_block_info): Eliminate macro.

	* cfgrtl.c (rtl_create_basic_block): Replace uses of
	basic_block_info with basic_block_info_for_fn, making uses
	of cfun explicit.
	* tree-cfg.c (build_gimple_cfg, create_bb): Likewise.
---
 gcc/basic-block.h |  1 -
 gcc/cfgrtl.c      |  4 ++--
 gcc/tree-cfg.c    | 10 ++++++----
 3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 3bd011e..69689f3 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -327,7 +327,6 @@ struct GTY(()) control_flow_graph {
   ((*basic_block_info_for_fn (FN))[(N)] = (BB))
 
 /* Defines for textual backward source compatibility.  */
-#define basic_block_info	(cfun->cfg->x_basic_block_info)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
 #define profile_status		(cfun->cfg->x_profile_status)
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index de110f4..772d939 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -355,10 +355,10 @@ rtl_create_basic_block (void *headp, void *endp, basic_block after)
   basic_block bb;
 
   /* Grow the basic block array if needed.  */
-  if ((size_t) last_basic_block >= basic_block_info->length ())
+  if ((size_t) last_basic_block >= basic_block_info_for_fn (cfun)->length ())
     {
       size_t new_size = last_basic_block + (last_basic_block + 3) / 4;
-      vec_safe_grow_cleared (basic_block_info, new_size);
+      vec_safe_grow_cleared (basic_block_info_for_fn (cfun), new_size);
     }
 
   n_basic_blocks_for_fn (cfun)++;
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index a706730..9558546 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -242,8 +242,10 @@ build_gimple_cfg (gimple_seq seq)
     create_empty_bb (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
   /* Adjust the size of the array.  */
-  if (basic_block_info->length () < (size_t) n_basic_blocks_for_fn (cfun))
-    vec_safe_grow_cleared (basic_block_info, n_basic_blocks_for_fn (cfun));
+  if (basic_block_info_for_fn (cfun)->length ()
+      < (size_t) n_basic_blocks_for_fn (cfun))
+    vec_safe_grow_cleared (basic_block_info_for_fn (cfun),
+			   n_basic_blocks_for_fn (cfun));
 
   /* To speed up statement iterator walks, we first purge dead labels.  */
   cleanup_dead_labels ();
@@ -603,10 +605,10 @@ create_bb (void *h, void *e, basic_block after)
   link_block (bb, after);
 
   /* Grow the basic block array if needed.  */
-  if ((size_t) last_basic_block == basic_block_info->length ())
+  if ((size_t) last_basic_block == basic_block_info_for_fn (cfun)->length ())
     {
       size_t new_size = last_basic_block + (last_basic_block + 3) / 4;
-      vec_safe_grow_cleared (basic_block_info, new_size);
+      vec_safe_grow_cleared (basic_block_info_for_fn (cfun), new_size);
     }
 
   /* Add the newly created block to the array.  */
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 02/13] Rename last_basic_block_for_function to last_basic_block_for_fn.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (3 preceding siblings ...)
  2013-12-06 14:53                     ` [PATCH 07/13] Eliminate basic_block_info macro David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 05/13] Eliminate SET_BASIC_BLOCK macro David Malcolm
                                       ` (8 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (last_basic_block_for_function): Rename to...
	(last_basic_block_for_fn): ...this.
	* ipa-utils.c (ipa_merge_profiles): Update for renaming of
	last_basic_block_for_function to last_basic_block_for_fn.
	* lto-streamer-in.c (input_cfg): Likewise.
	* lto-streamer-out.c (output_cfg): Likewise.
	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
	* tree-sra.c (propagate_dereference_distances, ipa_early_sra):
	Likewise.
---
 gcc/basic-block.h      | 2 +-
 gcc/ipa-utils.c        | 4 ++--
 gcc/lto-streamer-in.c  | 2 +-
 gcc/lto-streamer-out.c | 2 +-
 gcc/tree-cfg.c         | 2 +-
 gcc/tree-sra.c         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 234f6e9..88b0e48 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -317,7 +317,7 @@ struct GTY(()) control_flow_graph {
 #define basic_block_info_for_fn(FN)	     ((FN)->cfg->x_basic_block_info)
 #define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_fn(FN)		     ((FN)->cfg->x_n_edges)
-#define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
+#define last_basic_block_for_fn(FN)	     ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
 #define profile_status_for_function(FN)	     ((FN)->cfg->x_profile_status)
 
diff --git a/gcc/ipa-utils.c b/gcc/ipa-utils.c
index 0253bb0..569626d 100644
--- a/gcc/ipa-utils.c
+++ b/gcc/ipa-utils.c
@@ -711,8 +711,8 @@ ipa_merge_profiles (struct cgraph_node *dst,
 		 "Giving up; number of basic block mismatch.\n");
       match = false;
     }
-  else if (last_basic_block_for_function (srccfun)
-	   != last_basic_block_for_function (dstcfun))
+  else if (last_basic_block_for_fn (srccfun)
+	   != last_basic_block_for_fn (dstcfun))
     {
       if (cgraph_dump_file)
 	fprintf (cgraph_dump_file,
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 5a604d3..9ad4f5f 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -637,7 +637,7 @@ input_cfg (struct lto_input_block *ib, struct data_in *data_in,
 
   bb_count = streamer_read_uhwi (ib);
 
-  last_basic_block_for_function (fn) = bb_count;
+  last_basic_block_for_fn (fn) = bb_count;
   if (bb_count > basic_block_info_for_fn (fn)->length ())
     vec_safe_grow_cleared (basic_block_info_for_fn (fn), bb_count);
 
diff --git a/gcc/lto-streamer-out.c b/gcc/lto-streamer-out.c
index e99424e..858d49e 100644
--- a/gcc/lto-streamer-out.c
+++ b/gcc/lto-streamer-out.c
@@ -1633,7 +1633,7 @@ output_cfg (struct output_block *ob, struct function *fn)
 		       profile_status_for_function (fn));
 
   /* Output the number of the highest basic block.  */
-  streamer_write_uhwi (ob, last_basic_block_for_function (fn));
+  streamer_write_uhwi (ob, last_basic_block_for_fn (fn));
 
   FOR_ALL_BB_FN (bb, fn)
     {
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index e4a1371..3df4cbe 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -184,7 +184,7 @@ init_empty_tree_cfg_for_function (struct function *fn)
   init_flow (fn);
   profile_status_for_function (fn) = PROFILE_ABSENT;
   n_basic_blocks_for_fn (fn) = NUM_FIXED_BLOCKS;
-  last_basic_block_for_function (fn) = NUM_FIXED_BLOCKS;
+  last_basic_block_for_fn (fn) = NUM_FIXED_BLOCKS;
   vec_alloc (basic_block_info_for_fn (fn), initial_cfg_capacity);
   vec_safe_grow_cleared (basic_block_info_for_fn (fn),
 			 initial_cfg_capacity);
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index 0890613..9aa526f 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -3793,7 +3793,7 @@ propagate_dereference_distances (void)
 {
   basic_block bb;
 
-  auto_vec<basic_block> queue (last_basic_block_for_function (cfun));
+  auto_vec<basic_block> queue (last_basic_block_for_fn (cfun));
   queue.quick_push (ENTRY_BLOCK_PTR_FOR_FN (cfun));
   FOR_EACH_BB (bb)
     {
@@ -4970,7 +4970,7 @@ ipa_early_sra (void)
 
   bb_dereferences = XCNEWVEC (HOST_WIDE_INT,
 				 func_param_count
-				 * last_basic_block_for_function (cfun));
+				 * last_basic_block_for_fn (cfun));
   final_bbs = BITMAP_ALLOC (NULL);
 
   scan_function ();
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 05/13] Eliminate SET_BASIC_BLOCK macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (4 preceding siblings ...)
  2013-12-06 14:53                     ` [PATCH 02/13] Rename last_basic_block_for_function to last_basic_block_for_fn David Malcolm
@ 2013-12-06 14:53                     ` David Malcolm
  2013-12-06 14:53                     ` [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro David Malcolm
                                       ` (7 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 14:53 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (SET_BASIC_BLOCK): Eliminate macro.

	* cfg.c (compact_blocks): Replace uses of SET_BASIC_BLOCK
	with SET_BASIC_BLOCK_FOR_FN, making use of cfun explicit.
	(expunge_block): Likewise.
	* cfgrtl.c (create_basic_block_structure): Likewise.
	* df-core.c (df_compact_blocks, df_bb_replace): Likewise.
	* sel-sched.c (create_block_for_bookkeeping): Likewise.
	* tree-cfg.c (create_bb): Likewise.
---
 gcc/basic-block.h |  1 -
 gcc/cfg.c         | 10 +++++-----
 gcc/cfgrtl.c      |  2 +-
 gcc/df-core.c     |  8 ++++----
 gcc/sel-sched.c   |  4 ++--
 gcc/tree-cfg.c    |  2 +-
 6 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index da93c6f..f759e27 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -333,7 +333,6 @@ struct GTY(()) control_flow_graph {
 #define profile_status		(cfun->cfg->x_profile_status)
 
 #define BASIC_BLOCK(N)		((*basic_block_info)[(N)])
-#define SET_BASIC_BLOCK(N,BB)	((*basic_block_info)[(N)] = (BB))
 
 /* For iterating over basic blocks.  */
 #define FOR_BB_BETWEEN(BB, FROM, TO, DIR) \
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 786fe48..f386168 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -153,8 +153,8 @@ compact_blocks (void)
 {
   int i;
 
-  SET_BASIC_BLOCK (ENTRY_BLOCK, ENTRY_BLOCK_PTR_FOR_FN (cfun));
-  SET_BASIC_BLOCK (EXIT_BLOCK, EXIT_BLOCK_PTR_FOR_FN (cfun));
+  SET_BASIC_BLOCK_FOR_FN (cfun, ENTRY_BLOCK, ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  SET_BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK, EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   if (df)
     df_compact_blocks ();
@@ -165,14 +165,14 @@ compact_blocks (void)
       i = NUM_FIXED_BLOCKS;
       FOR_EACH_BB (bb)
 	{
-	  SET_BASIC_BLOCK (i, bb);
+	  SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
 	  bb->index = i;
 	  i++;
 	}
       gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
       for (; i < last_basic_block; i++)
-	SET_BASIC_BLOCK (i, NULL);
+	SET_BASIC_BLOCK_FOR_FN (cfun, i, NULL);
     }
   last_basic_block = n_basic_blocks_for_fn (cfun);
 }
@@ -183,7 +183,7 @@ void
 expunge_block (basic_block b)
 {
   unlink_block (b);
-  SET_BASIC_BLOCK (b->index, NULL);
+  SET_BASIC_BLOCK_FOR_FN (cfun, b->index, NULL);
   n_basic_blocks_for_fn (cfun)--;
   /* We should be able to ggc_free here, but we are not.
      The dead SSA_NAMES are left pointing to dead statements that are pointing
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 63f44af..045d78b 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -331,7 +331,7 @@ create_basic_block_structure (rtx head, rtx end, rtx bb_note, basic_block after)
   bb->index = last_basic_block++;
   bb->flags = BB_NEW | BB_RTL;
   link_block (bb, after);
-  SET_BASIC_BLOCK (bb->index, bb);
+  SET_BASIC_BLOCK_FOR_FN (cfun, bb->index, bb);
   df_bb_refs_record (bb->index, false);
   update_bb_for_insn (bb);
   BB_SET_PARTITION (bb, BB_UNPARTITIONED);
diff --git a/gcc/df-core.c b/gcc/df-core.c
index 37876af..4fb92a9 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1601,7 +1601,7 @@ df_compact_blocks (void)
   i = NUM_FIXED_BLOCKS;
   FOR_EACH_BB (bb)
     {
-      SET_BASIC_BLOCK (i, bb);
+      SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
       bb->index = i;
       i++;
     }
@@ -1609,7 +1609,7 @@ df_compact_blocks (void)
   gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
   for (; i < last_basic_block; i++)
-    SET_BASIC_BLOCK (i, NULL);
+    SET_BASIC_BLOCK_FOR_FN (cfun, i, NULL);
 
 #ifdef DF_DEBUG_CFG
   if (!df_lr->solutions_dirty)
@@ -1645,10 +1645,10 @@ df_bb_replace (int old_index, basic_block new_block)
     }
 
   df_clear_bb_dirty (new_block);
-  SET_BASIC_BLOCK (old_index, new_block);
+  SET_BASIC_BLOCK_FOR_FN (cfun, old_index, new_block);
   new_block->index = old_index;
   df_set_bb_dirty (BASIC_BLOCK (old_index));
-  SET_BASIC_BLOCK (new_block_index, NULL);
+  SET_BASIC_BLOCK_FOR_FN (cfun, new_block_index, NULL);
 }
 
 
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index 1e3fcf0..1195f7e 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -4663,8 +4663,8 @@ create_block_for_bookkeeping (edge e1, edge e2)
 	      new_bb->index = succ->index;
 	      succ->index = i;
 
-	      SET_BASIC_BLOCK (new_bb->index, new_bb);
-	      SET_BASIC_BLOCK (succ->index, succ);
+	      SET_BASIC_BLOCK_FOR_FN (cfun, new_bb->index, new_bb);
+	      SET_BASIC_BLOCK_FOR_FN (cfun, succ->index, succ);
 
 	      memcpy (&gbi, SEL_GLOBAL_BB_INFO (new_bb), sizeof (gbi));
 	      memcpy (SEL_GLOBAL_BB_INFO (new_bb), SEL_GLOBAL_BB_INFO (succ),
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 6c2cc16..2d7916b 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -610,7 +610,7 @@ create_bb (void *h, void *e, basic_block after)
     }
 
   /* Add the newly created block to the array.  */
-  SET_BASIC_BLOCK (last_basic_block, bb);
+  SET_BASIC_BLOCK_FOR_FN (cfun, last_basic_block, bb);
 
   n_basic_blocks_for_fn (cfun)++;
   last_basic_block++;
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 09/13] Eliminate profile_status macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (7 preceding siblings ...)
  2013-12-06 15:08                     ` [PATCH 11/13] Eliminate FOR_EACH_BB macro David Malcolm
@ 2013-12-06 15:08                     ` David Malcolm
  2013-12-06 15:08                     ` [PATCH 08/13] Eliminate label_to_block_map macro David Malcolm
                                       ` (4 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:08 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (profile_status): Eliminate macro.

	* cfgbuild.c (find_many_sub_basic_blocks): Eliminate use of
	profile_status macro in favor of profile_status_for_fn, making
	use of cfun explicit.
	* cfghooks.c (account_profile_record): Likewise.
	* cfgloopanal.c (single_likely_exit): Likewise.
	* cfgrtl.c (rtl_verify_edges, rtl_account_profile_record): Likewise.
	* graphite.c (graphite_finalize): Likewise.
	* internal-fn.c (ubsan_expand_si_overflow_addsub_check,
	ubsan_expand_si_overflow_neg_check,
	ubsan_expand_si_overflow_mul_check): Likewise.
	* ipa-split.c (consider_split, execute_split_functions): Likewise.
	* loop-unroll.c (decide_peel_simple): Likewise.
	* optabs.c (emit_cmp_and_jump_insn_1): Likewise.
	* predict.c (maybe_hot_edge_p, probably_never_executed,
	predictable_edge_p, probability_reliable_p, gimple_predict_edge,
	tree_estimate_probability_driver, estimate_bb_frequencies,
	compute_function_frequency, rebuild_frequencies): Likewise.
	* profile.c (compute_branch_probabilities): Likewise.
	* tree-cfg.c (gimple_account_profile_record): Likewise.
	* tree-inline.c (optimize_inline_calls): Likewise.
---
 gcc/basic-block.h |  1 -
 gcc/cfgbuild.c    |  2 +-
 gcc/cfghooks.c    |  4 ++--
 gcc/cfgloopanal.c |  2 +-
 gcc/cfgrtl.c      |  6 +++---
 gcc/graphite.c    |  2 +-
 gcc/internal-fn.c |  6 +++---
 gcc/ipa-split.c   |  4 ++--
 gcc/loop-unroll.c |  2 +-
 gcc/optabs.c      |  2 +-
 gcc/predict.c     | 26 +++++++++++++-------------
 gcc/profile.c     |  4 ++--
 gcc/tree-cfg.c    |  4 ++--
 gcc/tree-inline.c |  3 ++-
 14 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 4ab8289..d000a43 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -328,7 +328,6 @@ struct GTY(()) control_flow_graph {
 
 /* Defines for textual backward source compatibility.  */
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
-#define profile_status		(cfun->cfg->x_profile_status)
 
 /* For iterating over basic blocks.  */
 #define FOR_BB_BETWEEN(BB, FROM, TO, DIR) \
diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index 08534d4..a0c2c66 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -618,7 +618,7 @@ find_many_sub_basic_blocks (sbitmap blocks)
 
   /* Update branch probabilities.  Expect only (un)conditional jumps
      to be created with only the forward edges.  */
-  if (profile_status != PROFILE_ABSENT)
+  if (profile_status_for_fn (cfun) != PROFILE_ABSENT)
     FOR_BB_BETWEEN (bb, min, max->next_bb, next_bb)
       {
 	edge e;
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 0cd6af0..ab1c15f 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -1411,7 +1411,7 @@ account_profile_record (struct profile_record *record, int after_pass)
   FOR_ALL_BB (bb)
    {
       if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
-	  && profile_status != PROFILE_ABSENT)
+	  && profile_status_for_fn (cfun) != PROFILE_ABSENT)
 	{
 	  sum = 0;
 	  FOR_EACH_EDGE (e, ei, bb->succs)
@@ -1426,7 +1426,7 @@ account_profile_record (struct profile_record *record, int after_pass)
 	    record->num_mismatched_count_out[after_pass]++;
 	}
       if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
-	  && profile_status != PROFILE_ABSENT)
+	  && profile_status_for_fn (cfun) != PROFILE_ABSENT)
 	{
 	  sum = 0;
 	  FOR_EACH_EDGE (e, ei, bb->preds)
diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c
index 0cee6c6..2260f4b 100644
--- a/gcc/cfgloopanal.c
+++ b/gcc/cfgloopanal.c
@@ -470,7 +470,7 @@ single_likely_exit (struct loop *loop)
 	 ruled out by this test.  The static branch prediction algorithm
          will not assign such a low probability to conditionals for usual
          reasons.  */
-      if (profile_status != PROFILE_ABSENT
+      if (profile_status_for_fn (cfun) != PROFILE_ABSENT
 	  && ex->probability < 5 && !ex->count)
 	continue;
       if (!found)
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 772d939..34fe4f3 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -2420,7 +2420,7 @@ rtl_verify_edges (void)
 	  && any_condjump_p (BB_END (bb)))
 	{
 	  if (XINT (note, 0) != BRANCH_EDGE (bb)->probability
-	      && profile_status != PROFILE_ABSENT)
+	      && profile_status_for_fn (cfun) != PROFILE_ABSENT)
 	    {
 	      error ("verify_flow_info: REG_BR_PROB does not match cfg %i %i",
 		     XINT (note, 0), BRANCH_EDGE (bb)->probability);
@@ -5011,10 +5011,10 @@ rtl_account_profile_record (basic_block bb, int after_pass,
       {
 	record->size[after_pass]
 	  += insn_rtx_cost (PATTERN (insn), false);
-	if (profile_status == PROFILE_READ)
+	if (profile_status_for_fn (cfun) == PROFILE_READ)
 	  record->time[after_pass]
 	    += insn_rtx_cost (PATTERN (insn), true) * bb->count;
-	else if (profile_status == PROFILE_GUESSED)
+	else if (profile_status_for_fn (cfun) == PROFILE_GUESSED)
 	  record->time[after_pass]
 	    += insn_rtx_cost (PATTERN (insn), true) * bb->frequency;
       }
diff --git a/gcc/graphite.c b/gcc/graphite.c
index e46710c..a573ea7 100644
--- a/gcc/graphite.c
+++ b/gcc/graphite.c
@@ -245,7 +245,7 @@ graphite_finalize (bool need_cfg_cleanup_p)
     {
       scev_reset ();
       cleanup_tree_cfg ();
-      profile_status = PROFILE_ABSENT;
+      profile_status_for_fn (cfun) = PROFILE_ABSENT;
       release_recorded_exits ();
       tree_estimate_probability ();
     }
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index fb1e578..8c54d98 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -194,7 +194,7 @@ ubsan_expand_si_overflow_addsub_check (tree_code code, gimple stmt)
       if (maybe_expand_insn (icode, 4, ops))
 	{
 	  last = get_last_insn ();
-	  if (profile_status != PROFILE_ABSENT
+	  if (profile_status_for_fn (cfun) != PROFILE_ABSENT
 	      && JUMP_P (last)
 	      && any_condjump_p (last)
 	      && !find_reg_note (last, REG_BR_PROB, 0))
@@ -285,7 +285,7 @@ ubsan_expand_si_overflow_neg_check (gimple stmt)
       if (maybe_expand_insn (icode, 3, ops))
 	{
 	  last = get_last_insn ();
-	  if (profile_status != PROFILE_ABSENT
+	  if (profile_status_for_fn (cfun) != PROFILE_ABSENT
 	      && JUMP_P (last)
 	      && any_condjump_p (last)
 	      && !find_reg_note (last, REG_BR_PROB, 0))
@@ -364,7 +364,7 @@ ubsan_expand_si_overflow_mul_check (gimple stmt)
       if (maybe_expand_insn (icode, 4, ops))
 	{
 	  last = get_last_insn ();
-	  if (profile_status != PROFILE_ABSENT
+	  if (profile_status_for_fn (cfun) != PROFILE_ABSENT
 	      && JUMP_P (last)
 	      && any_condjump_p (last)
 	      && !find_reg_note (last, REG_BR_PROB, 0))
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index eca86da..f8fa0ee 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -411,7 +411,7 @@ consider_split (struct split_point *current, bitmap non_ssa_vars,
 	 a loop, enable splitting since inlining code skipping the loop
 	 is likely noticeable win.  */
       if (back_edge
-	  && profile_status != PROFILE_READ
+	  && profile_status_for_fn (cfun) != PROFILE_READ
 	  && incoming_freq < ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency)
 	{
 	  if (dump_file && (dump_flags & TDF_DETAILS))
@@ -1585,7 +1585,7 @@ execute_split_functions (void)
 
   /* We enforce splitting after loop headers when profile info is not
      available.  */
-  if (profile_status != PROFILE_READ)
+  if (profile_status_for_fn (cfun) != PROFILE_READ)
     mark_dfs_back_edges ();
 
   /* Initialize bitmap to track forbidden calls.  */
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index 9910b4e..d1c7b9c 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -1371,7 +1371,7 @@ decide_peel_simple (struct loop *loop, int flags)
      also branch from branch prediction POV (and probably better reason
      to not unroll/peel).  */
   if (num_loop_branches (loop) > 1
-      && profile_status != PROFILE_READ)
+      && profile_status_for_fn (cfun) != PROFILE_READ)
     {
       if (dump_file)
 	fprintf (dump_file, ";; Not peeling, contains branches\n");
diff --git a/gcc/optabs.c b/gcc/optabs.c
index e035af1..5172bd4 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -4286,7 +4286,7 @@ emit_cmp_and_jump_insn_1 (rtx test, enum machine_mode mode, rtx label, int prob)
   insn = emit_jump_insn (GEN_FCN (icode) (test, XEXP (test, 0),
                                           XEXP (test, 1), label));
   if (prob != -1
-      && profile_status != PROFILE_ABSENT
+      && profile_status_for_fn (cfun) != PROFILE_ABSENT
       && insn
       && JUMP_P (insn)
       && any_condjump_p (insn)
diff --git a/gcc/predict.c b/gcc/predict.c
index 1dec4dc..6bb1b2c 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -224,7 +224,7 @@ cgraph_maybe_hot_edge_p (struct cgraph_edge *edge)
 bool
 maybe_hot_edge_p (edge e)
 {
-  if (profile_status == PROFILE_READ)
+  if (profile_status_for_fn (cfun) == PROFILE_READ)
     return maybe_hot_count_p (cfun, e->count);
   return maybe_hot_frequency_p (cfun, EDGE_FREQUENCY (e));
 }
@@ -239,7 +239,7 @@ probably_never_executed (struct function *fun,
                          gcov_type count, int frequency)
 {
   gcc_checking_assert (fun);
-  if (profile_status_for_fn (fun) == PROFILE_READ)
+  if (profile_status_for_fn (cfun) == PROFILE_READ)
     {
       int unlikely_count_fraction = PARAM_VALUE (UNLIKELY_BB_COUNT_FRACTION);
       if (count * unlikely_count_fraction >= profile_info->runs)
@@ -438,7 +438,7 @@ optimize_loop_nest_for_size_p (struct loop *loop)
 bool
 predictable_edge_p (edge e)
 {
-  if (profile_status == PROFILE_ABSENT)
+  if (profile_status_for_fn (cfun) == PROFILE_ABSENT)
     return false;
   if ((e->probability
        <= PARAM_VALUE (PARAM_PREDICTABLE_BRANCH_OUTCOME) * REG_BR_PROB_BASE / 100)
@@ -539,8 +539,8 @@ gimple_predicted_by_p (const_basic_block bb, enum br_predictor predictor)
 static bool
 probability_reliable_p (int prob)
 {
-  return (profile_status == PROFILE_READ
-	  || (profile_status == PROFILE_GUESSED
+  return (profile_status_for_fn (cfun) == PROFILE_READ
+	  || (profile_status_for_fn (cfun) == PROFILE_GUESSED
 	      && (prob <= HITRATE (1) || prob >= HITRATE (99))));
 }
 
@@ -610,7 +610,7 @@ rtl_predict_edge (edge e, enum br_predictor predictor, int probability)
 void
 gimple_predict_edge (edge e, enum br_predictor predictor, int probability)
 {
-  gcc_assert (profile_status != PROFILE_GUESSED);
+  gcc_assert (profile_status_for_fn (cfun) != PROFILE_GUESSED);
   if ((e->src != ENTRY_BLOCK_PTR_FOR_FN (cfun) && EDGE_COUNT (e->src->succs) >
        1)
       && flag_guess_branch_prob && optimize)
@@ -2443,8 +2443,8 @@ tree_estimate_probability_driver (void)
   loop_optimizer_finalize ();
   if (dump_file && (dump_flags & TDF_DETAILS))
     gimple_dump_cfg (dump_file, dump_flags);
-  if (profile_status == PROFILE_ABSENT)
-    profile_status = PROFILE_GUESSED;
+  if (profile_status_for_fn (cfun) == PROFILE_ABSENT)
+    profile_status_for_fn (cfun) = PROFILE_GUESSED;
   return 0;
 }
 \f
@@ -2954,7 +2954,7 @@ estimate_bb_frequencies (bool force)
   basic_block bb;
   sreal freq_max;
 
-  if (force || profile_status != PROFILE_READ || !counts_to_freqs ())
+  if (force || profile_status_for_fn (cfun) != PROFILE_READ || !counts_to_freqs ())
     {
       static int real_values_initialized = 0;
 
@@ -3030,7 +3030,7 @@ compute_function_frequency (void)
   if (DECL_STATIC_DESTRUCTOR (current_function_decl))
     node->only_called_at_exit = true;
 
-  if (profile_status != PROFILE_READ)
+  if (profile_status_for_fn (cfun) != PROFILE_READ)
     {
       int flags = flags_from_decl_or_type (current_function_decl);
       if (lookup_attribute ("cold", DECL_ATTRIBUTES (current_function_decl))
@@ -3189,8 +3189,8 @@ rebuild_frequencies (void)
   FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun), NULL, next_bb)
     count_max = MAX (bb->count, count_max);
 
-  if (profile_status == PROFILE_GUESSED
-      || (profile_status == PROFILE_READ && count_max < REG_BR_PROB_BASE/10))
+  if (profile_status_for_fn (cfun) == PROFILE_GUESSED
+      || (profile_status_for_fn (cfun) == PROFILE_READ && count_max < REG_BR_PROB_BASE/10))
     {
       loop_optimizer_init (0);
       add_noreturn_fake_exit_edges ();
@@ -3200,7 +3200,7 @@ rebuild_frequencies (void)
       remove_fake_exit_edges ();
       loop_optimizer_finalize ();
     }
-  else if (profile_status == PROFILE_READ)
+  else if (profile_status_for_fn (cfun) == PROFILE_READ)
     counts_to_freqs ();
   else
     gcc_unreachable ();
diff --git a/gcc/profile.c b/gcc/profile.c
index 9aec3cb..24c16aa 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -797,7 +797,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
 	 give all abnormals frequency of 0, otherwise distribute the
 	 frequency over abnormals (this is the case of noreturn
 	 calls).  */
-      else if (profile_status == PROFILE_ABSENT)
+      else if (profile_status_for_fn (cfun) == PROFILE_ABSENT)
 	{
 	  int total = 0;
 
@@ -825,7 +825,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
 	}
     }
   counts_to_freqs ();
-  profile_status = PROFILE_READ;
+  profile_status_for_fn (cfun) = PROFILE_READ;
   compute_function_frequency ();
 
   if (dump_file)
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index f384b04..57d6487 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -7875,11 +7875,11 @@ gimple_account_profile_record (basic_block bb, int after_pass,
     {
       record->size[after_pass]
 	+= estimate_num_insns (gsi_stmt (i), &eni_size_weights);
-      if (profile_status == PROFILE_READ)
+      if (profile_status_for_fn (cfun) == PROFILE_READ)
 	record->time[after_pass]
 	  += estimate_num_insns (gsi_stmt (i),
 				 &eni_time_weights) * bb->count;
-      else if (profile_status == PROFILE_GUESSED)
+      else if (profile_status_for_fn (cfun) == PROFILE_GUESSED)
 	record->time[after_pass]
 	  += estimate_num_insns (gsi_stmt (i),
 				 &eni_time_weights) * bb->frequency;
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index 1d1bc1e..fd7eedb 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -4612,7 +4612,8 @@ optimize_inline_calls (tree fn)
 	  | TODO_cleanup_cfg
 	  | (gimple_in_ssa_p (cfun) ? TODO_remove_unused_locals : 0)
 	  | (gimple_in_ssa_p (cfun) ? TODO_update_address_taken : 0)
-	  | (profile_status != PROFILE_ABSENT ? TODO_rebuild_frequencies : 0));
+	  | (profile_status_for_fn (cfun) != PROFILE_ABSENT
+	     ? TODO_rebuild_frequencies : 0));
 }
 
 /* Passed to walk_tree.  Copies the node pointed to, if appropriate.  */
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 08/13] Eliminate label_to_block_map macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (8 preceding siblings ...)
  2013-12-06 15:08                     ` [PATCH 09/13] Eliminate profile_status macro David Malcolm
@ 2013-12-06 15:08                     ` David Malcolm
  2013-12-06 15:09                     ` [PATCH 10/13] Eliminate last_basic_block macro David Malcolm
                                       ` (3 subsequent siblings)
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:08 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (label_to_block_map): Eliminate macro.

	* gimple.c (gimple_set_bb): Replace uses of label_to_block_map
	with uses of label_to_block_map_for_fn, making uses of cfun be
	explicit.
	* tree-cfg.c (delete_tree_cfg_annotations): Likewise.
	(verify_gimple_label): Likewise.
---
 gcc/basic-block.h | 1 -
 gcc/gimple.c      | 8 +++++---
 gcc/tree-cfg.c    | 5 +++--
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 69689f3..4ab8289 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -328,7 +328,6 @@ struct GTY(()) control_flow_graph {
 
 /* Defines for textual backward source compatibility.  */
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
-#define label_to_block_map	(cfun->cfg->x_label_to_block_map)
 #define profile_status		(cfun->cfg->x_profile_status)
 
 /* For iterating over basic blocks.  */
diff --git a/gcc/gimple.c b/gcc/gimple.c
index f11362a..077dca5 100644
--- a/gcc/gimple.c
+++ b/gcc/gimple.c
@@ -1475,17 +1475,19 @@ gimple_set_bb (gimple stmt, basic_block bb)
       uid = LABEL_DECL_UID (t);
       if (uid == -1)
 	{
-	  unsigned old_len = vec_safe_length (label_to_block_map);
+	  unsigned old_len =
+	    vec_safe_length (label_to_block_map_for_fn (cfun));
 	  LABEL_DECL_UID (t) = uid = cfun->cfg->last_label_uid++;
 	  if (old_len <= (unsigned) uid)
 	    {
 	      unsigned new_len = 3 * uid / 2 + 1;
 
-	      vec_safe_grow_cleared (label_to_block_map, new_len);
+	      vec_safe_grow_cleared (label_to_block_map_for_fn (cfun),
+				     new_len);
 	    }
 	}
 
-      (*label_to_block_map)[uid] = bb;
+      (*label_to_block_map_for_fn (cfun))[uid] = bb;
     }
 }
 
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 9558546..f384b04 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -2379,7 +2379,7 @@ stmt_ends_bb_p (gimple t)
 void
 delete_tree_cfg_annotations (void)
 {
-  vec_free (label_to_block_map);
+  vec_free (label_to_block_map_for_fn (cfun));
 }
 
 
@@ -4281,7 +4281,8 @@ verify_gimple_label (gimple stmt)
 
   uid = LABEL_DECL_UID (decl);
   if (cfun->cfg
-      && (uid == -1 || (*label_to_block_map)[uid] != gimple_bb (stmt)))
+      && (uid == -1
+	  || (*label_to_block_map_for_fn (cfun))[uid] != gimple_bb (stmt)))
     {
       error ("incorrect entry in label_to_block_map");
       err |= true;
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 11/13] Eliminate FOR_EACH_BB macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (6 preceding siblings ...)
  2013-12-06 14:53                     ` [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro David Malcolm
@ 2013-12-06 15:08                     ` David Malcolm
  2013-12-07  7:13                       ` Oleg Endo
  2013-12-06 15:08                     ` [PATCH 09/13] Eliminate profile_status macro David Malcolm
                                       ` (5 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:08 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (FOR_EACH_BB): Eliminate macro.

	* asan.c (transform_statements, execute_sanopt): Eliminate
	use of FOR_EACH_BB in favor of FOR_EACH_BB_FN, to make use of cfun
	explicit.
	* auto-inc-dec.c (rest_of_handle_auto_inc_dec): Likewise.
	* bb-reorder.c (find_rarely_executed_basic_blocks_and_crossing_edges,
	set_edge_can_fallthru_flag, fix_up_fall_thru_edges,
	fix_crossing_unconditional_branches, add_reg_crossing_jump_notes,
	insert_section_boundary_note, rest_of_handle_reorder_blocks,
	duplicate_computed_gotos): Likewise.
	* cfg.c (clear_edges, compact_blocks, brief_dump_cfg): Likewise.
	* cfganal.c (find_unreachable_blocks, add_noreturn_fake_exit_edges,
	compute_dominance_frontiers_1, single_pred_before_succ_order): Likewise.
	* cfgbuild.c (find_many_sub_basic_blocks): Likewise.
	* cfgcleanup.c (try_optimize_cfg, delete_dead_jumptables): Likewise.
	* cfgexpand.c (add_scope_conflicts, discover_nonconstant_array_refs):
	Likewise.
	* cfgloop.c (flow_loops_cfg_dump, get_loop_body, record_loop_exits,
	verify_loop_structure): Likewise.
	* cfgloopanal.c (mark_loop_exit_edges): Likewise.
	* cfgrtl.c (compute_bb_for_insn, find_partition_fixes,
	verify_hot_cold_block_grouping, purge_all_dead_edges,
	fixup_abnormal_edges, record_effective_endpoints,
	outof_cfg_layout_mode, fixup_reorder_chain, force_one_exit_fallthru,
	break_superblocks): Likewise.
	* cgraphbuild.c (build_cgraph_edges, rebuild_cgraph_edges,
	cgraph_rebuild_references): Likewise.
	* combine-stack-adj.c (combine_stack_adjustments): Likewise.
	* combine.c (delete_noop_moves, create_log_links,
	combine_instructions): Likewise.
	* config/arm/arm.c (thumb1_reorg, thumb2_reorg): Likewise.
	* config/bfin/bfin.c (bfin_gen_bundles, reorder_var_tracking_notes):
	Likewise.
	* config/c6x/c6x.c (c6x_gen_bundles, conditionalize_after_sched,
	c6x_reorg): Likewise.
	* config/epiphany/resolve-sw-modes.c (resolve_sw_modes): Likewise.
	* config/frv/frv.c (frv_optimize_membar): Likewise.
	* config/i386/i386.c (ix86_finalize_stack_realign_flags): Likewise.
	* config/ia64/ia64.c (ia64_reorg): Likewise.
	* config/mips/mips.c (mips_annotate_pic_calls): Likewise.
	* config/picochip/picochip.c (reorder_var_tracking_notes): Likewise.
	* config/rs6000/rs6000.c (rs6000_alloc_sdmode_stack_slot): Likewise.
	* config/s390/s390.c (s390_regs_ever_clobbered): Likewise.
	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.
	* config/tilegx/tilegx.c (tilegx_gen_bundles,
	reorder_var_tracking_notes): Likewise.
	* config/tilepro/tilepro.c (tilepro_gen_bundles,
	reorder_var_tracking_notes): Likewise.
	* coverage.c (coverage_compute_cfg_checksum): Likewise.
	* cprop.c (compute_hash_table_work, compute_cprop_data,
	local_cprop_pass, find_implicit_sets): Likewise.
	* cse.c (cse_condition_code_reg): Likewise.
	* dce.c (prescan_insns_for_dce): Likewise.
	* df-core.c (df_compact_blocks): Likewise.
	* df-problems.c (df_word_lr_alloc): Likewise.
	* df-scan.c (df_scan_start_dump, df_scan_blocks, df_insn_rescan_all,
	df_update_entry_exit_and_calls): Likewise.
	* dominance.c (calculate_dominance_info, verify_dominators,
	debug_dominance_info): Likewise.
	* dse.c (dse_step5_nospill): Likewise.
	* except.c (finish_eh_generation): Likewise.
	* final.c (compute_alignments): Likewise.
	* function.c (thread_prologue_and_epilogue_insns,
	rest_of_match_asm_constraints): Likewise.
	* gcse.c (compute_hash_table_work, prune_expressions,
	compute_pre_data, compute_code_hoist_vbeinout, hoist_code,
	calculate_bb_reg_pressure, compute_ld_motion_mems): Likewise.
	* gimple-iterator.c (gsi_commit_edge_inserts): Likewise.
	* gimple-ssa-isolate-paths.c (find_implicit_erroneous_behaviour,
	find_explicit_erroneous_behaviour): Likewise.
	* graphite-sese-to-poly.c (rewrite_reductions_out_of_ssa,
	rewrite_cross_bb_scalar_deps_out_of_ssa): Likewise.
	* haifa-sched.c (haifa_sched_init): Likewise.
	* hw-doloop.c (discover_loops, set_bb_indices, reorder_loops):
	Likewise.
	* ifcvt.c (if_convert): Likewise.
	* init-regs.c (initialize_uninitialized_regs): Likewise.
	* ipa-prop.c (ipcp_transform_function): Likewise.
	* ipa-pure-const.c (analyze_function): Likewise.
	* ipa-split.c (find_split_points, execute_split_functions): Likewise.
	* ira-build.c (form_loop_tree): Likewise.
	* ira-costs.c (find_costs_and_classes): Likewise.
	* ira-emit.c (emit_moves, add_ranges_and_copies, ira_emit): Likewise.
	* ira.c (decrease_live_ranges_number, compute_regs_asm_clobbered,
	mark_elimination, update_equiv_regs, find_moveable_pseudos,
	split_live_ranges_for_shrink_wrap, allocate_initial_values): Likewise.
	* jump.c (mark_all_labels): Likewise.
	* lcm.c (compute_laterin, compute_insert_delete, compute_available,
	compute_nearerout, compute_rev_insert_delete): Likewise.
	* loop-init.c (fix_loop_structure): Likewise.
	* loop-invariant.c (calculate_loop_reg_pressure): Likewise.
	* lower-subreg.c (decompose_multiword_subregs): Likewise.
	* lra-assigns.c (assign_by_spills): Likewise.
	* lra-coalesce.c (lra_coalesce): Likewise.
	* lra-constraints.c (lra_inheritance, remove_inheritance_pseudos):
	Likewise.
	* lra-eliminations.c (lra_init_elimination): Likewise.
	* lra-spills.c (assign_spill_hard_regs, spill_pseudos,
	lra_final_code_change): Likewise.
	* lra.c (remove_scratches, check_rtl, has_nonexceptional_receiver,
	update_inc_notes): Likewise.
	* mcf.c (adjust_cfg_counts): Likewise.
	* mode-switching.c (optimize_mode_switching): Likewise.
	* modulo-sched.c (rest_of_handle_sms): Likewise.
	* omp-low.c (optimize_omp_library_calls, expand_omp_taskreg,
	expand_omp_target): Likewise.
	* postreload-gcse.c (alloc_mem, compute_hash_table): Likewise.
	* postreload.c (reload_cse_regs_1): Likewise.
	* predict.c (strip_predict_hints, tree_bb_level_predictions,
	tree_estimate_probability, expensive_function_p,
	estimate_bb_frequencies, compute_function_frequency): Likewise.
	* profile.c (is_inconsistent, compute_branch_probabilities,
	branch_prob): Likewise.
	* ree.c (find_removable_extensions): Likewise.
	* reg-stack.c (compensate_edges, convert_regs, reg_to_stack): Likewise.
	* regcprop.c (copyprop_hardreg_forward): Likewise.
	* reginfo.c (init_subregs_of_mode): Likewise.
	* regrename.c (regrename_analyze): Likewise.
	* regstat.c (regstat_compute_ri, regstat_compute_calls_crossed):
	Likewise.
	* reload1.c (has_nonexceptional_receiver, reload,
	calculate_elim_costs_all_insns): Likewise.
	* resource.c (init_resource_info, free_resource_info): Likewise.
	* sched-ebb.c (schedule_ebbs): Likewise.
	* sched-rgn.c (is_cfg_nonregular, find_single_block_region,
	haifa_find_rgns, sched_rgn_local_init): Likewise.
	* sel-sched-dump.c (sel_dump_cfg_2): Likewise.
	* sel-sched-ir.c (init_lv_sets, free_lv_sets,
	make_regions_from_the_rest): Likewise.
	* sese.c (build_sese_loop_nests, sese_build_liveouts): Likewise.
	* stack-ptr-mod.c (notice_stack_pointer_modification): Likewise.
	* store-motion.c (compute_store_table, build_store_vectors,
	one_store_motion_pass): Likewise.
	* tracer.c (tail_duplicate): Likewise.
	* trans-mem.c (compute_transaction_bits): Likewise.
	* tree-call-cdce.c (tree_call_cdce): Likewise.
	* tree-cfg.c (replace_loop_annotate, factor_computed_gotos,
	fold_cond_expr_cond, make_edges, assign_discriminators,
	make_abnormal_goto_edges, cleanup_dead_labels, group_case_labels,
	dump_cfg_stats, gimple_verify_flow_info, print_loop,
	execute_fixup_cfg): Likewise.
	* tree-cfgcleanup.c (cleanup_tree_cfg_1, merge_phi_nodes): Likewise.
	* tree-complex.c (init_dont_simulate_again, tree_lower_complex):
	Likewise.
	* tree-dfa.c (collect_dfa_stats, dump_enumerated_decls): Likewise.
	* tree-eh.c (execute_lower_resx, execute_lower_eh_dispatch,
	mark_reachable_handlers): Likewise.
	* tree-emutls.c (lower_emutls_function_body): Likewise.
	* tree-if-conv.c (main_tree_if_conversion): Likewise.
	* tree-inline.c (optimize_inline_calls): Likewise.
	* tree-into-ssa.c (rewrite_into_ssa, update_ssa): Likewise.
	* tree-nrv.c (tree_nrv, execute_return_slot_opt): Likewise.
	* tree-object-size.c (compute_object_sizes): Likewise.
	* tree-outof-ssa.c (eliminate_useless_phis, rewrite_trees,
	insert_backedge_copies): Likewise.
	* tree-profile.c (tree_profiling): Likewise.
	* tree-scalar-evolution.c (scev_const_prop): Likewise.
	* tree-sra.c (scan_function, sra_modify_function_body,
	propagate_dereference_distances, ipa_sra_modify_function_body,
	convert_callers): Likewise.
	* tree-ssa-ccp.c (ccp_initialize, execute_fold_all_builtins): Likewise.
	* tree-ssa-coalesce.c (build_ssa_conflict_graph,
	create_outofssa_var_map, coalesce_partitions): Likewise.
	* tree-ssa-copy.c (init_copy_prop): Likewise.
	* tree-ssa-copyrename.c (rename_ssa_copies): Likewise.
	* tree-ssa-dce.c (find_obviously_necessary_stmts,
	eliminate_unnecessary_stmts): Likewise.
	* tree-ssa-dom.c (free_all_edge_infos, tree_ssa_dominator_optimize):
	Likewise.
	* tree-ssa-forwprop.c (ssa_forward_propagate_and_combine): Likewise.
	* tree-ssa-live.c (clear_unused_block_pointer, remove_unused_locals,
	new_tree_live_info, calculate_live_on_exit, dump_live_info): Likewise.
	* tree-ssa-loop-im.c (analyze_memory_references,
	fill_always_executed_in, tree_ssa_lim_finalize): Likewise.
	* tree-ssa-loop-manip.c (find_uses_to_rename, verify_loop_closed_ssa):
	Likewise.
	* tree-ssa-math-opts.c (execute_cse_reciprocals, execute_cse_sincos,
	execute_optimize_bswap, execute_optimize_widening_mul): Likewise.
	* tree-ssa-propagate.c (substitute_and_fold): Likewise.
	* tree-ssa-structalias.c (compute_points_to_sets): Likewise.
	* tree-ssa-tail-merge.c (find_same_succ, reset_cluster_vectors):
	Likewise.
	* tree-ssa-ter.c (find_replaceable_exprs): Likewise.
	* tree-ssa-threadupdate.c (thread_through_all_blocks): Likewise.
	* tree-ssa-uncprop.c (associate_equivalences_with_edges,
	tree_ssa_uncprop): Likewise.
	* tree-ssa-uninit.c (warn_uninitialized_vars,
	execute_late_warn_uninitialized): Likewise.
	* tree-ssa.c (verify_ssa, execute_update_addresses_taken): Likewise.
	* tree-stdarg.c (check_all_va_list_escapes, execute_optimize_stdarg):
	Likewise.
	* tree-switch-conversion.c (do_switchconv): Likewise.
	* tree-vect-generic.c (expand_vector_operations): Likewise.
	* tree-vectorizer.c (adjust_simduid_builtins, note_simd_array_uses,
	execute_vect_slp): Likewise.
	* tree-vrp.c (check_all_array_refs, remove_range_assertions,
	vrp_initialize, identify_jump_threads): Likewise.
	* tsan.c (instrument_memory_accesses): Likewise.
	* ubsan.c (ubsan_pass): Likewise.
	* value-prof.c (verify_histograms, gimple_value_profile_transformations,
	gimple_find_values_to_profile): Likewise.
	* var-tracking.c (vt_find_locations, dump_dataflow_sets, vt_emit_notes,
	vt_initialize, delete_debug_insns, vt_finalize): Likewise.

gcc/testsuite/
	* g++.dg/plugin/selfassign.c (execute_warn_self_assign): Eliminate
	use of FOR_EACH_BB in favor of FOR_EACH_BB_FN, to make use of cfun
	explicit.
	* gcc.dg/plugin/selfassign.c (execute_warn_self_assign): Likewise.
---
 gcc/asan.c                               |  4 ++--
 gcc/auto-inc-dec.c                       |  2 +-
 gcc/basic-block.h                        |  2 --
 gcc/bb-reorder.c                         | 22 +++++++++++-----------
 gcc/cfg.c                                |  6 +++---
 gcc/cfganal.c                            |  8 ++++----
 gcc/cfgbuild.c                           |  8 ++++----
 gcc/cfgcleanup.c                         |  4 ++--
 gcc/cfgexpand.c                          |  4 ++--
 gcc/cfgloop.c                            | 14 +++++++-------
 gcc/cfgloopanal.c                        |  2 +-
 gcc/cfgrtl.c                             | 22 +++++++++++-----------
 gcc/cgraphbuild.c                        |  6 +++---
 gcc/combine-stack-adj.c                  |  2 +-
 gcc/combine.c                            |  8 ++++----
 gcc/config/arm/arm.c                     |  4 ++--
 gcc/config/bfin/bfin.c                   |  4 ++--
 gcc/config/c6x/c6x.c                     |  6 +++---
 gcc/config/epiphany/resolve-sw-modes.c   |  2 +-
 gcc/config/frv/frv.c                     |  4 ++--
 gcc/config/i386/i386.c                   |  2 +-
 gcc/config/ia64/ia64.c                   |  2 +-
 gcc/config/mips/mips.c                   |  2 +-
 gcc/config/picochip/picochip.c           |  2 +-
 gcc/config/rs6000/rs6000.c               |  2 +-
 gcc/config/s390/s390.c                   |  2 +-
 gcc/config/spu/spu.c                     |  2 +-
 gcc/config/tilegx/tilegx.c               |  4 ++--
 gcc/config/tilepro/tilepro.c             |  4 ++--
 gcc/coverage.c                           |  2 +-
 gcc/cprop.c                              |  8 ++++----
 gcc/cse.c                                |  2 +-
 gcc/dce.c                                |  2 +-
 gcc/df-core.c                            |  8 ++++----
 gcc/df-problems.c                        |  2 +-
 gcc/df-scan.c                            |  8 ++++----
 gcc/dominance.c                          |  6 +++---
 gcc/dse.c                                |  2 +-
 gcc/except.c                             |  2 +-
 gcc/final.c                              |  4 ++--
 gcc/function.c                           | 12 ++++++------
 gcc/gcse.c                               | 16 ++++++++--------
 gcc/gimple-iterator.c                    |  2 +-
 gcc/gimple-ssa-isolate-paths.c           |  4 ++--
 gcc/graphite-sese-to-poly.c              |  4 ++--
 gcc/haifa-sched.c                        |  2 +-
 gcc/hw-doloop.c                          |  6 +++---
 gcc/ifcvt.c                              |  2 +-
 gcc/init-regs.c                          |  2 +-
 gcc/ipa-prop.c                           |  2 +-
 gcc/ipa-pure-const.c                     |  2 +-
 gcc/ipa-split.c                          |  4 ++--
 gcc/ira-build.c                          |  2 +-
 gcc/ira-costs.c                          |  2 +-
 gcc/ira-emit.c                           | 14 +++++++-------
 gcc/ira.c                                | 22 +++++++++++-----------
 gcc/jump.c                               |  2 +-
 gcc/lcm.c                                | 10 +++++-----
 gcc/loop-init.c                          |  4 ++--
 gcc/loop-invariant.c                     |  2 +-
 gcc/lower-subreg.c                       |  4 ++--
 gcc/lra-assigns.c                        |  2 +-
 gcc/lra-coalesce.c                       |  4 ++--
 gcc/lra-constraints.c                    |  4 ++--
 gcc/lra-eliminations.c                   |  2 +-
 gcc/lra-spills.c                         |  6 +++---
 gcc/lra.c                                |  8 ++++----
 gcc/mcf.c                                |  2 +-
 gcc/mode-switching.c                     |  6 +++---
 gcc/modulo-sched.c                       |  2 +-
 gcc/omp-low.c                            |  6 +++---
 gcc/postreload-gcse.c                    |  4 ++--
 gcc/postreload.c                         |  2 +-
 gcc/predict.c                            | 14 +++++++-------
 gcc/profile.c                            |  8 ++++----
 gcc/ree.c                                |  2 +-
 gcc/reg-stack.c                          |  6 +++---
 gcc/regcprop.c                           |  4 ++--
 gcc/reginfo.c                            |  2 +-
 gcc/regrename.c                          |  8 ++++----
 gcc/regstat.c                            |  4 ++--
 gcc/reload1.c                            |  8 ++++----
 gcc/resource.c                           |  4 ++--
 gcc/sched-ebb.c                          |  2 +-
 gcc/sched-rgn.c                          | 26 +++++++++++++-------------
 gcc/sel-sched-dump.c                     |  2 +-
 gcc/sel-sched-ir.c                       | 10 +++++-----
 gcc/sese.c                               |  6 +++---
 gcc/stack-ptr-mod.c                      |  2 +-
 gcc/store-motion.c                       |  6 +++---
 gcc/testsuite/g++.dg/plugin/selfassign.c |  2 +-
 gcc/testsuite/gcc.dg/plugin/selfassign.c |  2 +-
 gcc/tracer.c                             |  2 +-
 gcc/trans-mem.c                          |  2 +-
 gcc/tree-call-cdce.c                     |  2 +-
 gcc/tree-cfg.c                           | 28 ++++++++++++++--------------
 gcc/tree-cfgcleanup.c                    |  4 ++--
 gcc/tree-complex.c                       |  4 ++--
 gcc/tree-dfa.c                           |  4 ++--
 gcc/tree-eh.c                            |  6 +++---
 gcc/tree-emutls.c                        |  2 +-
 gcc/tree-if-conv.c                       |  2 +-
 gcc/tree-inline.c                        |  2 +-
 gcc/tree-into-ssa.c                      |  8 ++++----
 gcc/tree-nrv.c                           |  6 +++---
 gcc/tree-object-size.c                   |  2 +-
 gcc/tree-outof-ssa.c                     |  6 +++---
 gcc/tree-profile.c                       |  2 +-
 gcc/tree-scalar-evolution.c              |  2 +-
 gcc/tree-sra.c                           | 10 +++++-----
 gcc/tree-ssa-ccp.c                       |  6 +++---
 gcc/tree-ssa-coalesce.c                  |  6 +++---
 gcc/tree-ssa-copy.c                      |  2 +-
 gcc/tree-ssa-copyrename.c                |  4 ++--
 gcc/tree-ssa-dce.c                       |  6 +++---
 gcc/tree-ssa-dom.c                       |  4 ++--
 gcc/tree-ssa-forwprop.c                  |  2 +-
 gcc/tree-ssa-live.c                      | 18 +++++++++---------
 gcc/tree-ssa-loop-im.c                   |  6 +++---
 gcc/tree-ssa-loop-manip.c                |  4 ++--
 gcc/tree-ssa-math-opts.c                 | 10 +++++-----
 gcc/tree-ssa-propagate.c                 |  2 +-
 gcc/tree-ssa-structalias.c               |  4 ++--
 gcc/tree-ssa-tail-merge.c                |  4 ++--
 gcc/tree-ssa-ter.c                       |  2 +-
 gcc/tree-ssa-threadupdate.c              |  2 +-
 gcc/tree-ssa-uncprop.c                   |  4 ++--
 gcc/tree-ssa-uninit.c                    |  4 ++--
 gcc/tree-ssa.c                           |  6 +++---
 gcc/tree-stdarg.c                        |  6 +++---
 gcc/tree-switch-conversion.c             |  2 +-
 gcc/tree-vect-generic.c                  |  2 +-
 gcc/tree-vectorizer.c                    |  6 +++---
 gcc/tree-vrp.c                           |  8 ++++----
 gcc/tsan.c                               |  2 +-
 gcc/ubsan.c                              |  2 +-
 gcc/value-prof.c                         |  6 +++---
 gcc/var-tracking.c                       | 16 ++++++++--------
 138 files changed, 363 insertions(+), 365 deletions(-)

diff --git a/gcc/asan.c b/gcc/asan.c
index 09c0667..a50186c 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -2043,7 +2043,7 @@ transform_statements (void)
   gimple_stmt_iterator i;
   int saved_last_basic_block = last_basic_block_for_fn (cfun);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       basic_block prev_bb = bb;
 
@@ -2557,7 +2557,7 @@ execute_sanopt (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/auto-inc-dec.c b/gcc/auto-inc-dec.c
index 6006b70..be7fdf8 100644
--- a/gcc/auto-inc-dec.c
+++ b/gcc/auto-inc-dec.c
@@ -1480,7 +1480,7 @@ rest_of_handle_auto_inc_dec (void)
   reg_next_use = XCNEWVEC (rtx, max_reg);
   reg_next_inc_use = XCNEWVEC (rtx, max_reg);
   reg_next_def = XCNEWVEC (rtx, max_reg);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     merge_in_block (max_reg, bb);
 
   free (reg_next_use);
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 174b650..b378a5b 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -333,8 +333,6 @@ struct GTY(()) control_flow_graph {
 #define FOR_EACH_BB_FN(BB, FN) \
   FOR_BB_BETWEEN (BB, (FN)->cfg->x_entry_block_ptr->next_bb, (FN)->cfg->x_exit_block_ptr, next_bb)
 
-#define FOR_EACH_BB(BB) FOR_EACH_BB_FN (BB, cfun)
-
 #define FOR_EACH_BB_REVERSE_FN(BB, FN) \
   FOR_BB_BETWEEN (BB, (FN)->cfg->x_exit_block_ptr->prev_bb, (FN)->cfg->x_entry_block_ptr, prev_bb)
 
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 363af2d..7f8ea07 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -1566,7 +1566,7 @@ find_rarely_executed_basic_blocks_and_crossing_edges (void)
   vec<basic_block> bbs_in_hot_partition = vNULL;
 
   /* Mark which partition (hot/cold) each basic block belongs in.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bool cold_bb = false;
 
@@ -1658,7 +1658,7 @@ find_rarely_executed_basic_blocks_and_crossing_edges (void)
 
   /* Mark every edge that crosses between sections.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_EACH_EDGE (e, ei, bb->succs)
       {
 	unsigned int flags = e->flags;
@@ -1691,7 +1691,7 @@ set_edge_can_fallthru_flag (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
@@ -1792,7 +1792,7 @@ fix_up_fall_thru_edges (void)
   rtx old_jump;
   rtx fall_thru_label;
 
-  FOR_EACH_BB (cur_bb)
+  FOR_EACH_BB_FN (cur_bb, cfun)
     {
       fall_thru = NULL;
       if (EDGE_COUNT (cur_bb->succs) > 0)
@@ -1992,7 +1992,7 @@ fix_crossing_conditional_branches (void)
   rtx old_label = NULL_RTX;
   rtx new_label;
 
-  FOR_EACH_BB (cur_bb)
+  FOR_EACH_BB_FN (cur_bb, cfun)
     {
       crossing_edge = NULL;
       if (EDGE_COUNT (cur_bb->succs) > 0)
@@ -2123,7 +2123,7 @@ fix_crossing_unconditional_branches (void)
   rtx cur_insn;
   edge succ;
 
-  FOR_EACH_BB (cur_bb)
+  FOR_EACH_BB_FN (cur_bb, cfun)
     {
       last_insn = BB_END (cur_bb);
 
@@ -2201,7 +2201,7 @@ add_reg_crossing_jump_notes (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_EACH_EDGE (e, ei, bb->succs)
       if ((e->flags & EDGE_CROSSING)
 	  && JUMP_P (BB_END (e->src))
@@ -2286,7 +2286,7 @@ insert_section_boundary_note (void)
   if (!crtl->has_bb_partition)
     return;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (!current_partition)
 	current_partition = BB_PARTITION (bb);
@@ -2321,7 +2321,7 @@ rest_of_handle_reorder_blocks (void)
   reorder_basic_blocks ();
   cleanup_cfg (CLEANUP_EXPENSIVE);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
   cfg_layout_finalize ();
@@ -2410,7 +2410,7 @@ duplicate_computed_gotos (void)
   /* Look for blocks that end in a computed jump, and see if such blocks
      are suitable for unfactoring.  If a block is a candidate for unfactoring,
      mark it in the candidates.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       edge e;
@@ -2457,7 +2457,7 @@ duplicate_computed_gotos (void)
     goto done;
 
   /* Duplicate computed gotos.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (bb->flags & BB_VISITED)
 	continue;
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 6c3181d..4f9d769 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -101,7 +101,7 @@ clear_edges (void)
   edge e;
   edge_iterator ei;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
 	free_edge (e);
@@ -163,7 +163,7 @@ compact_blocks (void)
       basic_block bb;
 
       i = NUM_FIXED_BLOCKS;
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
 	  bb->index = i;
@@ -828,7 +828,7 @@ brief_dump_cfg (FILE *file, int flags)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       dump_bb_info (file, bb, 0,
 		    flags & (TDF_COMMENT | TDF_DETAILS),
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index 9900d82..3371b4a 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -159,7 +159,7 @@ find_unreachable_blocks (void)
 
   /* Clear all the reachability flags.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->flags &= ~BB_REACHABLE;
 
   /* Add our starting points to the worklist.  Almost always there will
@@ -554,7 +554,7 @@ add_noreturn_fake_exit_edges (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (EDGE_COUNT (bb->succs) == 0)
       make_single_succ_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
 }
@@ -1236,7 +1236,7 @@ compute_dominance_frontiers_1 (bitmap_head *frontiers)
   edge p;
   edge_iterator ei;
   basic_block b;
-  FOR_EACH_BB (b)
+  FOR_EACH_BB_FN (b, cfun)
     {
       if (EDGE_COUNT (b->preds) >= 2)
 	{
@@ -1517,7 +1517,7 @@ single_pred_before_succ_order (void)
   bitmap_clear (visited);
 
   MARK_VISITED (ENTRY_BLOCK_PTR_FOR_FN (cfun));
-  FOR_EACH_BB (x)
+  FOR_EACH_BB_FN (x, cfun)
     {
       if (VISITED_P (x))
 	continue;
diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index f73bbc5..acfc73b 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -595,15 +595,15 @@ find_many_sub_basic_blocks (sbitmap blocks)
 {
   basic_block bb, min, max;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     SET_STATE (bb,
 	       bitmap_bit_p (blocks, bb->index) ? BLOCK_TO_SPLIT : BLOCK_ORIGINAL);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (STATE (bb) == BLOCK_TO_SPLIT)
       find_bb_boundaries (bb);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (STATE (bb) != BLOCK_ORIGINAL)
       break;
 
@@ -640,6 +640,6 @@ find_many_sub_basic_blocks (sbitmap blocks)
 	compute_outgoing_frequencies (bb);
       }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     SET_STATE (bb, 0);
 }
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 234e5b6..cf72c03 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -2613,7 +2613,7 @@ try_optimize_cfg (int mode)
 
   crossjumps_occured = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     update_forwarder_flag (bb);
 
   if (! targetm.cannot_modify_jumps_p ())
@@ -2955,7 +2955,7 @@ delete_dead_jumptables (void)
 
   /* A dead jump table does not belong to any basic block.  Scan insns
      between two adjacent basic blocks.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next;
 
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 014f78b..56bcd80 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -520,7 +520,7 @@ add_scope_conflicts (void)
 	}
     }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     add_scope_conflicts_1 (bb, work, true);
 
   free (rpo);
@@ -5378,7 +5378,7 @@ discover_nonconstant_array_refs (void)
   basic_block bb;
   gimple_stmt_iterator gsi;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	gimple stmt = gsi_stmt (gsi);
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 9d28950..5639e7a 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -50,7 +50,7 @@ flow_loops_cfg_dump (FILE *file)
   if (!file)
     return;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge succ;
       edge_iterator ei;
@@ -834,7 +834,7 @@ get_loop_body (const struct loop *loop)
       gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks_for_fn (cfun));
       body[tv++] = loop->header;
       body[tv++] = EXIT_BLOCK_PTR_FOR_FN (cfun);
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	body[tv++] = bb;
     }
   else
@@ -1082,7 +1082,7 @@ record_loop_exits (void)
 					  loop_exit_hash, loop_exit_eq,
 					  loop_exit_free);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
 	{
@@ -1343,7 +1343,7 @@ verify_loop_structure (void)
     verify_dominators (CDI_DOMINATORS);
 
   /* Check the headers.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb_loop_header_p (bb))
       {
 	if (bb->loop_father->header == NULL)
@@ -1479,7 +1479,7 @@ verify_loop_structure (void)
     {
       /* Record old info.  */
       irreds = sbitmap_alloc (last_basic_block_for_fn (cfun));
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  edge_iterator ei;
 	  if (bb->flags & BB_IRREDUCIBLE_LOOP)
@@ -1495,7 +1495,7 @@ verify_loop_structure (void)
       mark_irreducible_loops ();
 
       /* Compare.  */
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  edge_iterator ei;
 
@@ -1578,7 +1578,7 @@ verify_loop_structure (void)
 
       sizes = XCNEWVEC (unsigned, num);
       memset (sizes, 0, sizeof (unsigned) * num);
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  edge_iterator ei;
 	  if (bb->loop_father == current_loops->tree_root)
diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c
index 84b61c1..5e89cb1c 100644
--- a/gcc/cfgloopanal.c
+++ b/gcc/cfgloopanal.c
@@ -432,7 +432,7 @@ mark_loop_exit_edges (void)
   if (number_of_loops (cfun) <= 1)
     return;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge_iterator ei;
 
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 5dc52a6..daadd9b 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -416,7 +416,7 @@ compute_bb_for_insn (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx end = BB_END (bb);
       rtx insn;
@@ -2275,7 +2275,7 @@ find_partition_fixes (bool flag_only)
   /* Callers check this.  */
   gcc_checking_assert (crtl->has_bb_partition);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if ((BB_PARTITION (bb) == BB_COLD_PARTITION))
       bbs_in_cold_partition.safe_push (bb);
 
@@ -2372,7 +2372,7 @@ verify_hot_cold_block_grouping (void)
       || current_ir_type () != IR_RTL_CFGRTL)
     return err;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (current_partition != BB_UNPARTITIONED
           && BB_PARTITION (bb) != current_partition)
@@ -3201,7 +3201,7 @@ purge_all_dead_edges (void)
   int purged = false;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bool purged_here = purge_dead_edges (bb);
 
@@ -3226,7 +3226,7 @@ fixup_abnormal_edges (void)
   bool inserted = false;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
@@ -3449,7 +3449,7 @@ record_effective_endpoints (void)
     cfg_layout_function_header = NULL_RTX;
 
   next_insn = get_insns ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx end;
 
@@ -3479,7 +3479,7 @@ outof_cfg_layout_mode (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
 
@@ -3857,7 +3857,7 @@ fixup_reorder_chain (void)
   relink_block_chain (/*stay_in_cfglayout_mode=*/false);
 
   /* Annoying special case - jump around dead jumptables left in the code.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e = find_fallthru_edge (bb->succs);
 
@@ -3868,7 +3868,7 @@ fixup_reorder_chain (void)
   /* Ensure goto_locus from edges has some instructions with that locus
      in RTL.  */
   if (!optimize)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       {
         edge e;
         edge_iterator ei;
@@ -4047,7 +4047,7 @@ force_one_exit_fallthru (void)
 
   /* Fix up the chain of blocks -- make FORWARDER immediately precede the
      exit block.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (bb->aux == NULL && bb != forwarder)
 	{
@@ -4258,7 +4258,7 @@ break_superblocks (void)
   superblocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (superblocks);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->flags & BB_SUPERBLOCK)
       {
 	bb->flags &= ~BB_SUPERBLOCK;
diff --git a/gcc/cgraphbuild.c b/gcc/cgraphbuild.c
index 6c6698b..429dc8e 100644
--- a/gcc/cgraphbuild.c
+++ b/gcc/cgraphbuild.c
@@ -317,7 +317,7 @@ build_cgraph_edges (void)
 
   /* Create the callgraph edges and record the nodes referenced by the function.
      body.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -451,7 +451,7 @@ rebuild_cgraph_edges (void)
 
   node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -505,7 +505,7 @@ cgraph_rebuild_references (void)
 
   node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	ipa_record_stmt_references (node, gsi_stmt (gsi));
diff --git a/gcc/combine-stack-adj.c b/gcc/combine-stack-adj.c
index 5ca131f..5c897cf 100644
--- a/gcc/combine-stack-adj.c
+++ b/gcc/combine-stack-adj.c
@@ -95,7 +95,7 @@ combine_stack_adjustments (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     combine_stack_adjustments_for_block (bb);
 }
 
diff --git a/gcc/combine.c b/gcc/combine.c
index c7eb5e5..dea6c28 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -960,7 +960,7 @@ delete_noop_moves (void)
   rtx insn, next;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (insn = BB_HEAD (bb); insn != NEXT_INSN (BB_END (bb)); insn = next)
 	{
@@ -997,7 +997,7 @@ create_log_links (void)
      usage -- these are taken from original flow.c did. Don't ask me why it is
      done this way; I don't know and if it works, I don't want to know.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS_REVERSE (bb, insn)
         {
@@ -1160,7 +1160,7 @@ combine_instructions (rtx f, unsigned int nregs)
   last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
   create_log_links ();
-  FOR_EACH_BB (this_basic_block)
+  FOR_EACH_BB_FN (this_basic_block, cfun)
     {
       optimize_this_for_speed_p = optimize_bb_for_speed_p (this_basic_block);
       last_call_luid = 0;
@@ -1211,7 +1211,7 @@ combine_instructions (rtx f, unsigned int nregs)
   setup_incoming_promotions (first);
   last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
 
-  FOR_EACH_BB (this_basic_block)
+  FOR_EACH_BB_FN (this_basic_block, cfun)
     {
       rtx last_combined_insn = NULL_RTX;
       optimize_this_for_speed_p = optimize_bb_for_speed_p (this_basic_block);
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index b3a81b0..268e560 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -16548,7 +16548,7 @@ thumb1_reorg (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx dest, src;
       rtx pat, op0, set = NULL;
@@ -16626,7 +16626,7 @@ thumb2_reorg (void)
   compute_bb_for_insn ();
   df_analyze ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
 
diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index a1adf80..c15451c 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3957,7 +3957,7 @@ static void
 bfin_gen_bundles (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next;
       rtx slot[3];
@@ -4036,7 +4036,7 @@ static void
 reorder_var_tracking_notes (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next;
       rtx queue = NULL_RTX;
diff --git a/gcc/config/c6x/c6x.c b/gcc/config/c6x/c6x.c
index af310ba..6f80bc8 100644
--- a/gcc/config/c6x/c6x.c
+++ b/gcc/config/c6x/c6x.c
@@ -4629,7 +4629,7 @@ c6x_gen_bundles (void)
   basic_block bb;
   rtx insn, next, last_call;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next;
       /* The machine is eight insns wide.  We can have up to six shadow
@@ -5383,7 +5383,7 @@ conditionalize_after_sched (void)
 {
   basic_block bb;
   rtx insn;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
 	unsigned uid = INSN_UID (insn);
@@ -5959,7 +5959,7 @@ c6x_reorg (void)
 
   if (c6x_flag_schedule_insns2)
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	if ((bb->flags & BB_DISABLE_SCHEDULE) == 0)
 	  assign_reservations (BB_HEAD (bb), BB_END (bb));
     }
diff --git a/gcc/config/epiphany/resolve-sw-modes.c b/gcc/config/epiphany/resolve-sw-modes.c
index a780254..30f6920 100644
--- a/gcc/config/epiphany/resolve-sw-modes.c
+++ b/gcc/config/epiphany/resolve-sw-modes.c
@@ -69,7 +69,7 @@ resolve_sw_modes (void)
       df_note_add_problem ();
       df_analyze ();
     }
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
 	enum attr_fp_mode selected_mode;
diff --git a/gcc/config/frv/frv.c b/gcc/config/frv/frv.c
index a5aeb75..3755e62 100644
--- a/gcc/config/frv/frv.c
+++ b/gcc/config/frv/frv.c
@@ -8070,11 +8070,11 @@ frv_optimize_membar (void)
   first_io = XCNEWVEC (struct frv_io, last_basic_block_for_fn (cfun));
   last_membar = XCNEWVEC (rtx, last_basic_block_for_fn (cfun));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     frv_optimize_membar_local (bb, &first_io[bb->index],
 			       &last_membar[bb->index]);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (last_membar[bb->index] != 0)
       frv_optimize_membar_global (bb, first_io, last_membar[bb->index]);
 
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 0f6612d..aa9694f 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -10481,7 +10481,7 @@ ix86_finalize_stack_realign_flags (void)
       add_to_hard_reg_set (&set_up_by_prologue, Pmode, ARG_POINTER_REGNUM);
       add_to_hard_reg_set (&set_up_by_prologue, Pmode,
 			   HARD_FRAME_POINTER_REGNUM);
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
         {
           rtx insn;
 	  FOR_BB_INSNS (bb, insn)
diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
index 8f305c1..a837974 100644
--- a/gcc/config/ia64/ia64.c
+++ b/gcc/config/ia64/ia64.c
@@ -9688,7 +9688,7 @@ ia64_reorg (void)
 
       /* We can't let modulo-sched prevent us from scheduling any bbs,
 	 since we need the final schedule to produce bundle information.  */
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	bb->flags &= ~BB_DISABLE_SCHEDULE;
 
       initiate_bundle_states ();
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index f19478c..e65dc6b 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -15332,7 +15332,7 @@ mips_annotate_pic_calls (void)
   basic_block bb;
   rtx insn;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     {
       rtx call, reg, symbol, second_call;
diff --git a/gcc/config/picochip/picochip.c b/gcc/config/picochip/picochip.c
index 4756cb7..8861ffc 100644
--- a/gcc/config/picochip/picochip.c
+++ b/gcc/config/picochip/picochip.c
@@ -3174,7 +3174,7 @@ reorder_var_tracking_notes (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next, last_insn = NULL_RTX;
       rtx queue = NULL_RTX;
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index 599cf49..1db97fa 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -16395,7 +16395,7 @@ rs6000_alloc_sdmode_stack_slot (void)
   if (TARGET_NO_SDMODE_STACK)
     return;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	tree ret = walk_gimple_op (gsi_stmt (gsi), rs6000_check_sdmode, NULL);
diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
index fcd7532..f9b7cd0 100644
--- a/gcc/config/s390/s390.c
+++ b/gcc/config/s390/s390.c
@@ -7458,7 +7458,7 @@ s390_regs_ever_clobbered (char regs_ever_clobbered[])
       if (!call_really_used_regs[i])
 	regs_ever_clobbered[i] = 1;
 
-  FOR_EACH_BB (cur_bb)
+  FOR_EACH_BB_FN (cur_bb, cfun)
     {
       FOR_BB_INSNS (cur_bb, cur_insn)
 	{
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index 1a9895e..66209b6 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2645,7 +2645,7 @@ spu_machine_dependent_reorg (void)
     find_many_sub_basic_blocks (blocks);
 
   /* We have to schedule to make sure alignment is ok. */
-  FOR_EACH_BB (bb) bb->flags &= ~BB_DISABLE_SCHEDULE;
+  FOR_EACH_BB_FN (bb, cfun) bb->flags &= ~BB_DISABLE_SCHEDULE;
 
   /* The hints need to be scheduled, so call it again. */
   schedule_insns ();
diff --git a/gcc/config/tilegx/tilegx.c b/gcc/config/tilegx/tilegx.c
index c2f9e07..eecc9a9 100644
--- a/gcc/config/tilegx/tilegx.c
+++ b/gcc/config/tilegx/tilegx.c
@@ -4383,7 +4383,7 @@ static void
 tilegx_gen_bundles (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn, next;
       rtx end = NEXT_INSN (BB_END (bb));
@@ -4709,7 +4709,7 @@ static void
 reorder_var_tracking_notes (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
   {
     rtx insn, next;
     rtx queue = NULL_RTX;
diff --git a/gcc/config/tilepro/tilepro.c b/gcc/config/tilepro/tilepro.c
index 31bc490..b2bafb4 100644
--- a/gcc/config/tilepro/tilepro.c
+++ b/gcc/config/tilepro/tilepro.c
@@ -3988,7 +3988,7 @@ static void
 tilepro_gen_bundles (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
   {
     rtx insn, next;
     rtx end = NEXT_INSN (BB_END (bb));
@@ -4259,7 +4259,7 @@ static void
 reorder_var_tracking_notes (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
   {
     rtx insn, next;
     rtx queue = NULL_RTX;
diff --git a/gcc/coverage.c b/gcc/coverage.c
index f2ac5fc..f7a2924 100644
--- a/gcc/coverage.c
+++ b/gcc/coverage.c
@@ -588,7 +588,7 @@ coverage_compute_cfg_checksum (void)
   basic_block bb;
   unsigned chksum = n_basic_blocks_for_fn (cfun);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 600c617..7d07246 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -400,7 +400,7 @@ compute_hash_table_work (struct hash_table_d *table)
   /* Allocate vars to track sets of regs.  */
   reg_set_bitmap = ALLOC_REG_SET (NULL);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
 
@@ -649,7 +649,7 @@ compute_cprop_data (void)
      aren't recorded for the local pass so they cannot be propagated within
      their basic block by this pass and 2) the global pass would otherwise
      propagate them only in the successors of their basic block.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       int index = implicit_set_indexes[bb->index];
       if (index != -1)
@@ -1234,7 +1234,7 @@ local_cprop_pass (void)
   unsigned i;
 
   cselib_init (0);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS (bb, insn)
 	{
@@ -1359,7 +1359,7 @@ find_implicit_sets (void)
 
   implicit_sets = XCNEWVEC (rtx, implicit_sets_size);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Check for more than one successor.  */
       if (EDGE_COUNT (bb->succs) <= 1)
diff --git a/gcc/cse.c b/gcc/cse.c
index 74ae8ba..0e28f48 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -7335,7 +7335,7 @@ cse_condition_code_reg (void)
   else
     cc_reg_2 = NULL_RTX;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx last_insn;
       rtx cc_reg;
diff --git a/gcc/dce.c b/gcc/dce.c
index 07d31f7..3101102 100644
--- a/gcc/dce.c
+++ b/gcc/dce.c
@@ -623,7 +623,7 @@ prescan_insns_for_dce (bool fast)
   if (!df_in_progress && ACCUMULATE_OUTGOING_ARGS)
     arg_stores = BITMAP_ALLOC (NULL);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS_REVERSE_SAFE (bb, insn, prev)
 	if (NONDEBUG_INSN_P (insn))
diff --git a/gcc/df-core.c b/gcc/df-core.c
index d41fb72..ba57d39 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1543,7 +1543,7 @@ df_compact_blocks (void)
 	    bitmap_set_bit (dflow->out_of_date_transfer_functions, EXIT_BLOCK);
 
 	  i = NUM_FIXED_BLOCKS;
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    {
 	      if (bitmap_bit_p (&tmp, bb->index))
 		bitmap_set_bit (dflow->out_of_date_transfer_functions, i);
@@ -1564,7 +1564,7 @@ df_compact_blocks (void)
 	     place in the block_info vector.  Null out the copied
 	     item.  The entry and exit blocks never move.  */
 	  i = NUM_FIXED_BLOCKS;
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    {
 	      df_set_bb_info (dflow, i,
 			      (char *)problem_temps
@@ -1590,7 +1590,7 @@ df_compact_blocks (void)
       bitmap_copy (&tmp, df->blocks_to_analyze);
       bitmap_clear (df->blocks_to_analyze);
       i = NUM_FIXED_BLOCKS;
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  if (bitmap_bit_p (&tmp, bb->index))
 	    bitmap_set_bit (df->blocks_to_analyze, i);
@@ -1601,7 +1601,7 @@ df_compact_blocks (void)
   bitmap_clear (&tmp);
 
   i = NUM_FIXED_BLOCKS;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
       bb->index = i;
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index ab19372..70f7254 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -2427,7 +2427,7 @@ df_word_lr_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
 
   bitmap_obstack_initialize (&problem_data->word_lr_bitmaps);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_set_bit (df_word_lr->out_of_date_transfer_functions, bb->index);
 
   bitmap_set_bit (df_word_lr->out_of_date_transfer_functions, ENTRY_BLOCK);
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index 5f0ba4a..9f6f67a 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -449,7 +449,7 @@ df_scan_start_dump (FILE *file ATTRIBUTE_UNUSED)
 	fprintf (file, "} ");
       }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       if (INSN_P (insn))
 	{
@@ -673,7 +673,7 @@ df_scan_blocks (void)
   df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK));
 
   /* Regular blocks */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       unsigned int bb_index = bb->index;
       df_bb_refs_record (bb_index, true);
@@ -1415,7 +1415,7 @@ df_insn_rescan_all (void)
   bitmap_clear (&df->insns_to_rescan);
   bitmap_clear (&df->insns_to_notes_rescan);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       FOR_BB_INSNS (bb, insn)
@@ -4154,7 +4154,7 @@ df_update_entry_exit_and_calls (void)
 
   /* The call insns need to be rescanned because there may be changes
      in the set of registers clobbered across the call.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       FOR_BB_INSNS (bb, insn)
diff --git a/gcc/dominance.c b/gcc/dominance.c
index af73078..521b224 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -662,7 +662,7 @@ calculate_dominance_info (enum cdi_direction dir)
       calc_dfs_tree (&di, reverse);
       calc_idoms (&di, reverse);
 
-      FOR_EACH_BB (b)
+      FOR_EACH_BB_FN (b, cfun)
 	{
 	  TBB d = di.dom[di.dfs_order[b->index]];
 
@@ -1025,7 +1025,7 @@ verify_dominators (enum cdi_direction dir)
   calc_dfs_tree (&di, reverse);
   calc_idoms (&di, reverse);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       imm_bb = get_immediate_dominator (dir, bb);
       if (!imm_bb)
@@ -1492,7 +1492,7 @@ DEBUG_FUNCTION void
 debug_dominance_info (enum cdi_direction dir)
 {
   basic_block bb, bb2;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if ((bb2 = get_immediate_dominator (dir, bb)))
       fprintf (stderr, "%i %i\n", bb->index, bb2->index);
 }
diff --git a/gcc/dse.c b/gcc/dse.c
index a926cb8..e5b0850 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -3507,7 +3507,7 @@ static void
 dse_step5_nospill (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bb_info_t bb_info = bb_table[bb->index];
       insn_info_t insn_info = bb_info->last_insn;
diff --git a/gcc/except.c b/gcc/except.c
index e4b8cad..cf4fd14 100644
--- a/gcc/except.c
+++ b/gcc/except.c
@@ -1511,7 +1511,7 @@ finish_eh_generation (void)
     commit_edge_insertions ();
 
   /* Redirect all EH edges from the post_landing_pad to the landing pad.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       eh_landing_pad lp;
       edge_iterator ei;
diff --git a/gcc/final.c b/gcc/final.c
index 2ab6a4d..f475d27 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -700,14 +700,14 @@ compute_alignments (void)
       flow_loops_dump (dump_file, NULL, 1);
     }
   loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->frequency > freq_max)
       freq_max = bb->frequency;
   freq_threshold = freq_max / PARAM_VALUE (PARAM_ALIGN_THRESHOLD);
 
   if (dump_file)
     fprintf (dump_file, "freq_max: %i\n",freq_max);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx label = BB_HEAD (bb);
       int fallthru_frequency = 0, branch_frequency = 0, has_fallthru = 0;
diff --git a/gcc/function.c b/gcc/function.c
index d257af4..e00f583 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -6043,7 +6043,7 @@ thread_prologue_and_epilogue_insns (void)
       max_grow_size = get_uncond_jump_length ();
       max_grow_size *= PARAM_VALUE (PARAM_MAX_GROW_COPY_BB_INSNS);
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  rtx insn;
 	  unsigned size = 0;
@@ -6120,7 +6120,7 @@ thread_prologue_and_epilogue_insns (void)
 	 needing a prologue.  */
       bitmap_clear (&bb_on_list);
       bitmap_and_compl (&bb_antic_flags, &bb_flags, &bb_tail);
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  if (!bitmap_bit_p (&bb_antic_flags, bb->index))
 	    continue;
@@ -6154,7 +6154,7 @@ thread_prologue_and_epilogue_insns (void)
       /* Find exactly one edge that leads to a block in ANTIC from
 	 a block that isn't.  */
       if (!bitmap_bit_p (&bb_antic_flags, entry_edge->dest->index))
-	FOR_EACH_BB (bb)
+	FOR_EACH_BB_FN (bb, cfun)
 	  {
 	    if (!bitmap_bit_p (&bb_antic_flags, bb->index))
 	      continue;
@@ -6202,7 +6202,7 @@ thread_prologue_and_epilogue_insns (void)
 	  /* Find tail blocks reachable from both blocks needing a
 	     prologue and blocks not needing a prologue.  */
 	  if (!bitmap_empty_p (&bb_tail))
-	    FOR_EACH_BB (bb)
+	    FOR_EACH_BB_FN (bb, cfun)
 	      {
 		bool some_pro, some_no_pro;
 		if (!bitmap_bit_p (&bb_tail, bb->index))
@@ -6480,7 +6480,7 @@ thread_prologue_and_epilogue_insns (void)
 	 we take advantage of cfg_layout_finalize using
 	 fixup_fallthru_exit_predecessor.  */
       cfg_layout_initialize (0);
-      FOR_EACH_BB (cur_bb)
+      FOR_EACH_BB_FN (cur_bb, cfun)
 	if (cur_bb->index >= NUM_FIXED_BLOCKS
 	    && cur_bb->next_bb->index >= NUM_FIXED_BLOCKS)
 	  cur_bb->aux = cur_bb->next_bb;
@@ -7192,7 +7192,7 @@ rest_of_match_asm_constraints (void)
     return 0;
 
   df_set_flags (DF_DEFER_INSN_RESCAN);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS (bb, insn)
 	{
diff --git a/gcc/gcse.c b/gcc/gcse.c
index fa25a46..a6874ab 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -1559,7 +1559,7 @@ compute_hash_table_work (struct hash_table_d *table)
   for (i = 0; i < max_reg_num (); ++i)
     reg_avail_info[i].last_bb = NULL;
 
-  FOR_EACH_BB (current_bb)
+  FOR_EACH_BB_FN (current_bb, cfun)
     {
       rtx insn;
       unsigned int regno;
@@ -1899,7 +1899,7 @@ prune_expressions (bool pre_p)
 	}
     }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
@@ -2020,7 +2020,7 @@ compute_pre_data (void)
      ~(TRANSP | COMP)
   */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bitmap_ior (ae_kill[bb->index], transp[bb->index], comp[bb->index]);
       bitmap_not (ae_kill[bb->index], ae_kill[bb->index]);
@@ -2855,7 +2855,7 @@ compute_code_hoist_vbeinout (void)
     {
       fprintf (dump_file, "hoisting vbeinout computation: %d passes\n", passes);
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
         {
 	  fprintf (dump_file, "vbein (%d): ", bb->index);
 	  dump_bitmap_file (dump_file, hoist_vbein[bb->index]);
@@ -3169,7 +3169,7 @@ hoist_code (void)
   to_bb_head = XCNEWVEC (int, get_max_uid ());
   bb_size = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       int to_head;
@@ -3512,7 +3512,7 @@ calculate_bb_reg_pressure (void)
 
   ira_setup_eliminable_regset ();
   curr_regs_live = BITMAP_ALLOC (&reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       curr_bb = bb;
       BB_DATA (bb)->live_in = BITMAP_ALLOC (NULL);
@@ -3562,7 +3562,7 @@ calculate_bb_reg_pressure (void)
     return;
 
   fprintf (dump_file, "\nRegister Pressure: \n");
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       fprintf (dump_file, "  Basic block %d: \n", bb->index);
       for (i = 0; (int) i < ira_pressure_classes_num; i++)
@@ -3888,7 +3888,7 @@ compute_ld_motion_mems (void)
   pre_ldst_mems = NULL;
   pre_ldst_table.create (13);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS (bb, insn)
 	{
diff --git a/gcc/gimple-iterator.c b/gcc/gimple-iterator.c
index 9f51e6c..2460c61 100644
--- a/gcc/gimple-iterator.c
+++ b/gcc/gimple-iterator.c
@@ -839,7 +839,7 @@ gsi_commit_edge_inserts (void)
   gsi_commit_one_edge_insert (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
 			      NULL);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_EACH_EDGE (e, ei, bb->succs)
       gsi_commit_one_edge_insert (e, NULL);
 }
diff --git a/gcc/gimple-ssa-isolate-paths.c b/gcc/gimple-ssa-isolate-paths.c
index 052bf3f..aaa7537 100644
--- a/gcc/gimple-ssa-isolate-paths.c
+++ b/gcc/gimple-ssa-isolate-paths.c
@@ -216,7 +216,7 @@ find_implicit_erroneous_behaviour (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
 
@@ -304,7 +304,7 @@ find_explicit_erroneous_behaviour (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
 
diff --git a/gcc/graphite-sese-to-poly.c b/gcc/graphite-sese-to-poly.c
index 975db63..66c1b6e 100644
--- a/gcc/graphite-sese-to-poly.c
+++ b/gcc/graphite-sese-to-poly.c
@@ -2295,7 +2295,7 @@ rewrite_reductions_out_of_ssa (scop_p scop)
   gimple_stmt_iterator psi;
   sese region = SCOP_REGION (scop);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb_in_sese_p (bb, region))
       for (psi = gsi_start_phis (bb); !gsi_end_p (psi);)
 	{
@@ -2489,7 +2489,7 @@ rewrite_cross_bb_scalar_deps_out_of_ssa (scop_p scop)
   /* Create an extra empty BB after the scop.  */
   split_edge (SESE_EXIT (region));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb_in_sese_p (bb, region))
       for (psi = gsi_start_bb (bb); !gsi_end_p (psi); gsi_next (&psi))
 	changed |= rewrite_cross_bb_scalar_deps (scop, &psi);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index d5e3309..4f3b054 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -6709,7 +6709,7 @@ haifa_sched_init (void)
 
     sched_init_bbs ();
 
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       bbs.quick_push (bb);
     sched_init_luids (bbs);
     sched_deps_init (true);
diff --git a/gcc/hw-doloop.c b/gcc/hw-doloop.c
index 77c8149..b6184a2 100644
--- a/gcc/hw-doloop.c
+++ b/gcc/hw-doloop.c
@@ -357,7 +357,7 @@ discover_loops (bitmap_obstack *loop_stack, struct hw_doloop_hooks *hooks)
   /* Find all the possible loop tails.  This means searching for every
      loop_end instruction.  For each one found, create a hwloop_info
      structure and add the head block to the work list. */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx tail = BB_END (bb);
       rtx insn, reg;
@@ -480,7 +480,7 @@ set_bb_indices (void)
   intptr_t index;
 
   index = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->aux = (void *) index++;
 }
 
@@ -537,7 +537,7 @@ reorder_loops (hwloop_info loops)
       loops = loops->next;
     }
   
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
 	bb->aux = bb->next_bb;
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index ac0276c..543a70d 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -4408,7 +4408,7 @@ if_convert (bool after_combine)
 	fprintf (dump_file, "\n\n========== Pass %d ==========\n", pass);
 #endif
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
           basic_block new_bb;
           while (!df_get_bb_dirty (bb)
diff --git a/gcc/init-regs.c b/gcc/init-regs.c
index 2a15b3e..d26ee9b 100644
--- a/gcc/init-regs.c
+++ b/gcc/init-regs.c
@@ -59,7 +59,7 @@ initialize_uninitialized_regs (void)
 
   df_analyze ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       bitmap lr = DF_LR_IN (bb);
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 83dc53e..7b16b7e 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -4726,7 +4726,7 @@ ipcp_transform_function (struct cgraph_node *node)
   descriptors.safe_grow_cleared (param_count);
   ipa_populate_param_decls (node, descriptors);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	struct ipa_agg_replacement_value *v;
diff --git a/gcc/ipa-pure-const.c b/gcc/ipa-pure-const.c
index d84b35f..a60e078 100644
--- a/gcc/ipa-pure-const.c
+++ b/gcc/ipa-pure-const.c
@@ -754,7 +754,7 @@ analyze_function (struct cgraph_node *fn, bool ipa)
 
   push_cfun (DECL_STRUCT_FUNCTION (decl));
 
-  FOR_EACH_BB (this_block)
+  FOR_EACH_BB_FN (this_block, cfun)
     {
       gimple_stmt_iterator gsi;
       struct walk_stmt_info wi;
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index d5dfb8d..390adf1 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -1070,7 +1070,7 @@ find_split_points (int overall_time, int overall_size)
         stack.pop ();
     }
   ENTRY_BLOCK_PTR_FOR_FN (cfun)->aux = NULL;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->aux = NULL;
   stack.release ();
   BITMAP_FREE (current.ssa_names_to_pass);
@@ -1595,7 +1595,7 @@ execute_split_functions (void)
   /* Compute local info about basic blocks and determine function size/time.  */
   bb_info_vec.safe_grow_cleared (last_basic_block_for_fn (cfun) + 1);
   memset (&best_split_point, 0, sizeof (best_split_point));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       int time = 0;
       int size = 0;
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index f9258ee..660fb0d 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -341,7 +341,7 @@ form_loop_tree (void)
   /* We can not use loop/bb node access macros because of potential
      checking and because the nodes are not initialized enough
      yet.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bb_node = &ira_bb_nodes[bb->index];
       bb_node->bb = bb;
diff --git a/gcc/ira-costs.c b/gcc/ira-costs.c
index d7299e6..c8d64d5 100644
--- a/gcc/ira-costs.c
+++ b/gcc/ira-costs.c
@@ -1585,7 +1585,7 @@ find_costs_and_classes (FILE *dump_file)
 	{
 	  basic_block bb;
 
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    process_bb_for_costs (bb);
 	}
 
diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
index d59461b..196efa0 100644
--- a/gcc/ira-emit.c
+++ b/gcc/ira-emit.c
@@ -986,7 +986,7 @@ emit_moves (void)
   edge e;
   rtx insns, tmp;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (at_bb_start[bb->index] != NULL)
 	{
@@ -1203,7 +1203,7 @@ add_ranges_and_copies (void)
   bitmap live_through;
 
   live_through = ira_allocate_bitmap ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* It does not matter what loop_tree_node (of source or
 	 destination block) to use for searching allocnos by their
@@ -1260,7 +1260,7 @@ ira_emit (bool loops_p)
   ira_free_bitmap (renamed_regno_bitmap);
   ira_free_bitmap (local_allocno_bitmap);
   setup_entered_from_non_parent_p ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       at_bb_start[bb->index] = NULL;
       at_bb_end[bb->index] = NULL;
@@ -1275,15 +1275,15 @@ ira_emit (bool loops_p)
   memset (allocno_last_set_check, 0, sizeof (int) * max_reg_num ());
   memset (hard_regno_last_set_check, 0, sizeof (hard_regno_last_set_check));
   curr_tick = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     unify_moves (bb, true);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     unify_moves (bb, false);
   move_vec.create (ira_allocnos_num);
   emit_moves ();
   add_ranges_and_copies ();
   /* Clean up: */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       free_move_list (at_bb_start[bb->index]);
       free_move_list (at_bb_end[bb->index]);
@@ -1301,7 +1301,7 @@ ira_emit (bool loops_p)
      reload assumes initial insn codes defined.  The insn codes can be
      invalidated by CFG infrastructure for example in jump
      redirection.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS_REVERSE (bb, insn)
       if (INSN_P (insn))
 	recog_memoized (insn);
diff --git a/gcc/ira.c b/gcc/ira.c
index ae35035..b4ae0ca 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -2135,7 +2135,7 @@ decrease_live_ranges_number (void)
   if (ira_dump_file)
     fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
 	set = single_set (insn);
@@ -2358,7 +2358,7 @@ compute_regs_asm_clobbered (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       FOR_BB_INSNS_REVERSE (bb, insn)
@@ -2951,7 +2951,7 @@ mark_elimination (int from, int to)
   basic_block bb;
   bitmap r;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       r = DF_LR_IN (bb);
       if (bitmap_bit_p (r, from))
@@ -3473,7 +3473,7 @@ update_equiv_regs (void)
      paradoxical subreg. Don't set such reg sequivalent to a mem,
      because lra will not substitute such equiv memory in order to
      prevent access beyond allocated memory for paradoxical memory subreg.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       if (NONDEBUG_INSN_P (insn))
 	for_each_rtx (&insn, set_paradoxical_subreg, (void *) pdx_subregs);
@@ -3481,7 +3481,7 @@ update_equiv_regs (void)
   /* Scan the insns and find which registers have equivalences.  Do this
      in a separate scan of the insns because (due to -fcse-follow-jumps)
      a register can be set below its use.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       loop_depth = bb_loop_depth (bb);
 
@@ -3905,7 +3905,7 @@ update_equiv_regs (void)
 
   if (!bitmap_empty_p (cleared_regs))
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
 	  bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
@@ -4532,7 +4532,7 @@ find_moveable_pseudos (void)
   bitmap_initialize (&used, 0);
   bitmap_initialize (&set, 0);
   bitmap_initialize (&unusable_as_input, 0);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       bitmap transp = bb_transp_live + bb->index;
@@ -4595,7 +4595,7 @@ find_moveable_pseudos (void)
   bitmap_clear (&used);
   bitmap_clear (&set);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bitmap local = bb_local + bb->index;
       rtx insn;
@@ -4824,7 +4824,7 @@ find_moveable_pseudos (void)
 	}
     }
   
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bitmap_clear (bb_local + bb->index);
       bitmap_clear (bb_transp_live + bb->index);
@@ -4921,7 +4921,7 @@ split_live_ranges_for_shrink_wrap (void)
   bitmap_initialize (&reachable, 0);
   queue.create (n_basic_blocks_for_fn (cfun));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       if (CALL_P (insn) && !SIBLING_CALL_P (insn))
 	{
@@ -5145,7 +5145,7 @@ allocate_initial_values (void)
 		     fixed regs are accepted.  */
 		  SET_REGNO (preg, new_regno);
 		  /* Update global register liveness information.  */
-		  FOR_EACH_BB (bb)
+		  FOR_EACH_BB_FN (bb, cfun)
 		    {
 		      if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
 			SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
diff --git a/gcc/jump.c b/gcc/jump.c
index a27aaa9..5eefeef 100644
--- a/gcc/jump.c
+++ b/gcc/jump.c
@@ -275,7 +275,7 @@ mark_all_labels (rtx f)
   if (current_ir_type () == IR_RTL_CFGLAYOUT)
     {
       basic_block bb;
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  /* In cfglayout mode, we don't bother with trivial next-insn
 	     propagation of LABEL_REFs into JUMP_LABEL.  This will be
diff --git a/gcc/lcm.c b/gcc/lcm.c
index 1129d6c..0b528d9 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -281,7 +281,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
 
   /* Add all the blocks to the worklist.  This prevents an early exit from
      the loop given our optimistic initialization of LATER above.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       *qin++ = bb;
       bb->aux = bb;
@@ -350,7 +350,7 @@ compute_insert_delete (struct edge_list *edge_list, sbitmap *antloc,
   int x;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_and_compl (del[bb->index], antloc[bb->index],
 			laterin[bb->index]);
 
@@ -497,7 +497,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
 
   /* Put every block on the worklist; this is necessary because of the
      optimistic initialization of AVOUT above.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       *qin++ = bb;
       bb->aux = bb;
@@ -638,7 +638,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
 
   /* Add all the blocks to the worklist.  This prevents an early exit
      from the loop given our optimistic initialization of NEARER.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       *tos++ = bb;
       bb->aux = bb;
@@ -695,7 +695,7 @@ compute_rev_insert_delete (struct edge_list *edge_list, sbitmap *st_avloc,
   int x;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_and_compl (del[bb->index], st_avloc[bb->index],
 			nearerout[bb->index]);
 
diff --git a/gcc/loop-init.c b/gcc/loop-init.c
index 664b1ac..3dc6953 100644
--- a/gcc/loop-init.c
+++ b/gcc/loop-init.c
@@ -213,7 +213,7 @@ fix_loop_structure (bitmap changed_bbs)
   /* Remember the depth of the blocks in the loop hierarchy, so that we can
      recognize blocks whose loop nesting relationship has changed.  */
   if (changed_bbs)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       bb->aux = (void *) (size_t) loop_depth (bb->loop_father);
 
   /* Remove the dead loops from structures.  We start from the innermost
@@ -256,7 +256,7 @@ fix_loop_structure (bitmap changed_bbs)
   /* Mark the blocks whose loop has changed.  */
   if (changed_bbs)
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  if ((void *) (size_t) loop_depth (bb->loop_father) != bb->aux)
 	    bitmap_set_bit (changed_bbs, bb->index);
diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
index 9f1fc07..f47bd50 100644
--- a/gcc/loop-invariant.c
+++ b/gcc/loop-invariant.c
@@ -1825,7 +1825,7 @@ calculate_loop_reg_pressure (void)
       }
   ira_setup_eliminable_regset ();
   bitmap_initialize (&curr_regs_live, &reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       curr_loop = bb->loop_father;
       if (curr_loop == current_loops->tree_root)
diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c
index 60c47b9..0b0e397 100644
--- a/gcc/lower-subreg.c
+++ b/gcc/lower-subreg.c
@@ -1463,7 +1463,7 @@ decompose_multiword_subregs (bool decompose_copies)
   memset (reg_copy_graph.address (), 0, sizeof (bitmap) * max);
 
   speed_p = optimize_function_for_speed_p (cfun);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
 
@@ -1543,7 +1543,7 @@ decompose_multiword_subregs (bool decompose_copies)
       EXECUTE_IF_SET_IN_BITMAP (decomposable_context, 0, regno, iter)
 	decompose_register (regno);
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  rtx insn;
 
diff --git a/gcc/lra-assigns.c b/gcc/lra-assigns.c
index 88fc693..41ee286 100644
--- a/gcc/lra-assigns.c
+++ b/gcc/lra-assigns.c
@@ -1302,7 +1302,7 @@ assign_by_spills (void)
 
       /* FIXME: Look up the changed insns in the cached LRA insn data using
 	 an EXECUTE_IF_SET_IN_BITMAP over changed_insns.  */
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	FOR_BB_INSNS (bb, insn)
 	if (bitmap_bit_p (&changed_insns, INSN_UID (insn)))
 	  {
diff --git a/gcc/lra-coalesce.c b/gcc/lra-coalesce.c
index 859e02f..94a21f0 100644
--- a/gcc/lra-coalesce.c
+++ b/gcc/lra-coalesce.c
@@ -239,7 +239,7 @@ lra_coalesce (void)
   mv_num = 0;
   /* Collect moves.  */
   coalesced_moves = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS_SAFE (bb, insn, next)
 	if (INSN_P (insn)
@@ -297,7 +297,7 @@ lra_coalesce (void)
 	}
     }
   bitmap_initialize (&used_pseudos_bitmap, &reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       update_live_info (df_get_live_in (bb));
       update_live_info (df_get_live_out (bb));
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index bb5242a..f04166c 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -5300,7 +5300,7 @@ lra_inheritance (void)
   bitmap_initialize (&live_regs, &reg_obstack);
   bitmap_initialize (&temp_bitmap, &reg_obstack);
   bitmap_initialize (&ebb_global_regs, &reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       start_bb = bb;
       if (lra_dump_file != NULL)
@@ -5401,7 +5401,7 @@ remove_inheritance_pseudos (bitmap remove_pseudos)
      because we need to marks insns affected by previous
      inheritance/split pass for processing by the subsequent
      constraint pass.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       fix_bb_live_info (df_get_live_in (bb), remove_pseudos);
       fix_bb_live_info (df_get_live_out (bb), remove_pseudos);
diff --git a/gcc/lra-eliminations.c b/gcc/lra-eliminations.c
index 915e3a0..6c52bb3 100644
--- a/gcc/lra-eliminations.c
+++ b/gcc/lra-eliminations.c
@@ -1284,7 +1284,7 @@ init_elimination (void)
   struct elim_table *ep;
 
   init_elim_table ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       curr_sp_change = 0;
       stop_to_sp_elimination_p = false;
diff --git a/gcc/lra-spills.c b/gcc/lra-spills.c
index 6bebb92..1e5f52b 100644
--- a/gcc/lra-spills.c
+++ b/gcc/lra-spills.c
@@ -280,7 +280,7 @@ assign_spill_hard_regs (int *pseudo_regnos, int n)
 	  add_to_hard_reg_set (&reserved_hard_regs[p],
 			       lra_reg_info[i].biggest_mode, hard_regno);
   bitmap_initialize (&ok_insn_bitmap, &reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       if (DEBUG_INSN_P (insn)
 	  || ((set = single_set (insn)) != NULL_RTX
@@ -478,7 +478,7 @@ spill_pseudos (void)
 	  bitmap_ior_into (&changed_insns, &lra_reg_info[i].insn_bitmap);
 	}
     }
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS (bb, insn)
 	if (bitmap_bit_p (&changed_insns, INSN_UID (insn)))
@@ -686,7 +686,7 @@ lra_final_code_change (void)
     if (lra_reg_info[i].nrefs != 0
 	&& (hard_regno = lra_get_regno_hard_regno (i)) >= 0)
       SET_REGNO (regno_reg_rtx[i], hard_regno);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS_SAFE (bb, insn, curr)
       if (INSN_P (insn))
 	{
diff --git a/gcc/lra.c b/gcc/lra.c
index 50a0786..21b8af1 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -1960,7 +1960,7 @@ remove_scratches (void)
   scratches.create (get_max_uid ());
   bitmap_initialize (&scratch_bitmap, &reg_obstack);
   bitmap_initialize (&scratch_operand_bitmap, &reg_obstack);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     if (INSN_P (insn))
       {
@@ -2049,7 +2049,7 @@ check_rtl (bool final_p)
   rtx insn;
 
   lra_assert (! final_p || reload_completed);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     if (NONDEBUG_INSN_P (insn)
 	&& GET_CODE (PATTERN (insn)) != USE
@@ -2090,7 +2090,7 @@ has_nonexceptional_receiver (void)
   /* First determine which blocks can reach exit via normal paths.  */
   tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->flags &= ~BB_REACHABLE;
 
   /* Place the exit block on our worklist.  */
@@ -2165,7 +2165,7 @@ update_inc_notes (void)
   basic_block bb;
   rtx insn;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     if (NONDEBUG_INSN_P (insn))
       {
diff --git a/gcc/mcf.c b/gcc/mcf.c
index e709f2a..f9b5505 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -1281,7 +1281,7 @@ adjust_cfg_counts (fixup_graph_type *fixup_graph)
     {
       fprintf (dump_file, "\nCheck %s() CFG flow conservation:\n",
 	       current_function_name ());
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
         {
           if ((bb->count != sum_edge_counts (bb->preds))
                || (bb->count != sum_edge_counts (bb->succs)))
diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
index a9e5069..4e31d68 100644
--- a/gcc/mode-switching.c
+++ b/gcc/mode-switching.c
@@ -516,7 +516,7 @@ optimize_mode_switching (void)
       /* Determine what the first use (if any) need for a mode of entity E is.
 	 This will be the mode that is anticipatable for this block.
 	 Also compute the initial transparency settings.  */
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  struct seginfo *ptr;
 	  int last_mode = no_mode;
@@ -624,7 +624,7 @@ optimize_mode_switching (void)
 	  int m = current_mode[j] = MODE_PRIORITY_TO_MODE (entity_map[j], i);
 	  struct bb_info *info = bb_info[j];
 
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    {
 	      if (info[bb->index].seginfo->mode == m)
 		bitmap_set_bit (antic[bb->index], j);
@@ -637,7 +637,7 @@ optimize_mode_switching (void)
       /* Calculate the optimal locations for the
 	 placement mode switches to modes with priority I.  */
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	bitmap_not (kill[bb->index], transp[bb->index]);
       edge_list = pre_edge_lcm (n_entities, transp, comp, antic,
 				kill, &insert, &del);
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index f313044..ba8d020 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -3343,7 +3343,7 @@ rest_of_handle_sms (void)
   max_regno = max_reg_num ();
 
   /* Finalize layout changes.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
       bb->aux = bb->next_bb;
   free_dominance_info (CDI_DOMINATORS);
diff --git a/gcc/omp-low.c b/gcc/omp-low.c
index c929157..05fca40 100644
--- a/gcc/omp-low.c
+++ b/gcc/omp-low.c
@@ -4545,7 +4545,7 @@ optimize_omp_library_calls (gimple entry_stmt)
 		      && find_omp_clause (gimple_omp_task_clauses (entry_stmt),
 					  OMP_CLAUSE_UNTIED) != NULL);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	gimple call = gsi_stmt (gsi);
@@ -4849,7 +4849,7 @@ expand_omp_taskreg (struct omp_region *region)
 	  basic_block bb;
 	  bool changed = false;
 
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    changed |= gimple_purge_dead_eh_edges (bb);
 	  if (changed)
 	    cleanup_tree_cfg ();
@@ -7939,7 +7939,7 @@ expand_omp_target (struct omp_region *region)
 	  basic_block bb;
 	  bool changed = false;
 
-	  FOR_EACH_BB (bb)
+	  FOR_EACH_BB_FN (bb, cfun)
 	    changed |= gimple_purge_dead_eh_edges (bb);
 	  if (changed)
 	    cleanup_tree_cfg ();
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index 9ce17e5..a1204f9 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -266,7 +266,7 @@ alloc_mem (void)
   /* Find the largest UID and create a mapping from UIDs to CUIDs.  */
   uid_cuid = XCNEWVEC (int, get_max_uid () + 1);
   i = 1;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
         if (INSN_P (insn))
@@ -828,7 +828,7 @@ compute_hash_table (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
 
diff --git a/gcc/postreload.c b/gcc/postreload.c
index b0c6342..bfa5a38 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -213,7 +213,7 @@ reload_cse_regs_1 (void)
   cselib_init (CSELIB_RECORD_MEMORY);
   init_alias_analysis ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
 	if (INSN_P (insn))
diff --git a/gcc/predict.c b/gcc/predict.c
index 6bb1b2c..78efb72 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -1955,7 +1955,7 @@ strip_predict_hints (void)
   gimple ass_stmt;
   tree var;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator bi;
       for (bi = gsi_start_bb (bb); !gsi_end_p (bi);)
@@ -2226,7 +2226,7 @@ tree_bb_level_predictions (void)
 
   apply_return_prediction ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
@@ -2400,10 +2400,10 @@ tree_estimate_probability (void)
   if (number_of_loops (cfun) > 1)
     predict_loops ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     tree_estimate_probability_bb (bb);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     combine_predictions_for_bb (bb);
 
 #ifdef ENABLE_CHECKING
@@ -2928,7 +2928,7 @@ expensive_function_p (int threshold)
 
   /* Maximally BB_FREQ_MAX^2 so overflow won't happen.  */
   limit = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency * threshold;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
 
@@ -2997,7 +2997,7 @@ estimate_bb_frequencies (bool force)
       estimate_loops ();
 
       memcpy (&freq_max, &real_zero, sizeof (real_zero));
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	if (sreal_compare (&freq_max, &BLOCK_INFO (bb)->frequency) < 0)
 	  memcpy (&freq_max, &BLOCK_INFO (bb)->frequency, sizeof (freq_max));
 
@@ -3055,7 +3055,7 @@ compute_function_frequency (void)
      functions to unlikely and that is most of what we care about.  */
   if (!cfun->after_inlining)
     node->frequency = NODE_FREQUENCY_UNLIKELY_EXECUTED;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (maybe_hot_bb_p (cfun, bb))
 	{
diff --git a/gcc/profile.c b/gcc/profile.c
index 24c16aa..62b126c 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -354,7 +354,7 @@ is_inconsistent (void)
 {
   basic_block bb;
   bool inconsistent = false;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       inconsistent |= is_edge_inconsistent (bb->preds);
       if (!dump_file && inconsistent)
@@ -692,7 +692,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
 
   /* If the graph has been correctly solved, every block will have a
      succ and pred count of zero.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gcc_assert (!BB_INFO (bb)->succ_count && !BB_INFO (bb)->pred_count);
     }
@@ -1021,7 +1021,7 @@ branch_prob (void)
      We also add fake exit edges for each call and asm statement in the
      basic, since it may not return.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       int need_exit_edge = 0, need_entry_edge = 0;
       int have_exit_edge = 0, have_entry_edge = 0;
@@ -1260,7 +1260,7 @@ branch_prob (void)
       /* Initialize the output.  */
       output_location (NULL, 0, NULL, NULL);
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  gimple_stmt_iterator gsi;
 	  gcov_position_t offset = 0;
diff --git a/gcc/ree.c b/gcc/ree.c
index 87427fd..9938e98 100644
--- a/gcc/ree.c
+++ b/gcc/ree.c
@@ -835,7 +835,7 @@ find_removable_extensions (void)
   rtx insn, set;
   unsigned *def_map = XCNEWVEC (unsigned, max_insn_uid);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       {
 	if (!NONDEBUG_INSN_P (insn))
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 6aad466..87b9821 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -2846,7 +2846,7 @@ compensate_edges (void)
 
   starting_stack_p = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
       {
         edge e;
@@ -3153,7 +3153,7 @@ convert_regs (void)
 
   /* ??? Process all unreachable blocks.  Though there's no excuse
      for keeping these even when not optimizing.  */
-  FOR_EACH_BB (b)
+  FOR_EACH_BB_FN (b, cfun)
     {
       block_info bi = BLOCK_INFO (b);
 
@@ -3212,7 +3212,7 @@ reg_to_stack (void)
 
   /* Set up block info for each basic block.  */
   alloc_aux_for_blocks (sizeof (struct block_info_def));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       block_info bi = BLOCK_INFO (bb);
       edge_iterator ei;
diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index 0438875..3c9ef3d 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -1076,7 +1076,7 @@ copyprop_hardreg_forward (void)
       = create_alloc_pool ("debug insn changes pool",
 			   sizeof (struct queued_debug_insn_change), 256);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bitmap_set_bit (visited, bb->index);
 
@@ -1112,7 +1112,7 @@ copyprop_hardreg_forward (void)
 
   if (MAY_HAVE_DEBUG_INSNS)
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	if (bitmap_bit_p (visited, bb->index)
 	    && all_vd[bb->index].n_debug_insn_changes)
 	  {
diff --git a/gcc/reginfo.c b/gcc/reginfo.c
index db66a09..46288eb 100644
--- a/gcc/reginfo.c
+++ b/gcc/reginfo.c
@@ -1266,7 +1266,7 @@ init_subregs_of_mode (void)
   bitmap_obstack_initialize (&srom_obstack);
   subregs_of_mode = BITMAP_ALLOC (&srom_obstack);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
       if (NONDEBUG_INSN_P (insn))
         find_subregs_of_mode (PATTERN (insn), subregs_of_mode);
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 3c242fb..9ff94d0 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -674,7 +674,7 @@ regrename_analyze (bitmap bb_mask)
   /* Gather some information about the blocks in this function.  */
   rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks_for_fn (cfun));
   i = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       struct bb_rename_info *ri = rename_info + i;
       ri->bb = bb;
@@ -778,7 +778,7 @@ regrename_analyze (bitmap bb_mask)
      We perform the analysis for both incoming and outgoing edges, but we
      only need to merge once (in the second part, after verifying outgoing
      edges).  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       struct bb_rename_info *bb_ri = (struct bb_rename_info *) bb->aux;
       unsigned j;
@@ -843,7 +843,7 @@ regrename_analyze (bitmap bb_mask)
 	    }
 	}
     }
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       struct bb_rename_info *bb_ri = (struct bb_rename_info *) bb->aux;
       unsigned j;
@@ -920,7 +920,7 @@ regrename_analyze (bitmap bb_mask)
 
   free (rename_info);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->aux = NULL;
 }
 
diff --git a/gcc/regstat.c b/gcc/regstat.c
index 48d27c3..6a191d8 100644
--- a/gcc/regstat.c
+++ b/gcc/regstat.c
@@ -375,7 +375,7 @@ regstat_compute_ri (void)
   reg_info_p = XCNEWVEC (struct reg_info_t, max_regno);
   local_live_last_luid = XNEWVEC (int, max_regno);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       regstat_bb_compute_ri (bb->index, live, artificial_uses,
 			     local_live, local_processed,
@@ -522,7 +522,7 @@ regstat_compute_calls_crossed (void)
   reg_info_p_size = max_regno;
   reg_info_p = XCNEWVEC (struct reg_info_t, max_regno);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       regstat_bb_compute_calls_crossed (bb->index, live);
     }
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 15c6db5..47439ce 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -613,7 +613,7 @@ has_nonexceptional_receiver (void)
   /* First determine which blocks can reach exit via normal paths.  */
   tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->flags &= ~BB_REACHABLE;
 
   /* Place the exit block on our worklist.  */
@@ -641,7 +641,7 @@ has_nonexceptional_receiver (void)
 
   /* Now see if there's a reachable block with an exceptional incoming
      edge.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->flags & BB_REACHABLE && bb_has_abnormal_pred (bb))
       return true;
 
@@ -1048,7 +1048,7 @@ reload (rtx first, int global)
      pseudo.  */
 
   if (! frame_pointer_needed)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
 
   /* Come here (with failure set nonzero) if we can't get enough spill
@@ -1592,7 +1592,7 @@ calculate_elim_costs_all_insns (void)
   set_initial_elim_offsets ();
   set_initial_label_offsets ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       elim_bb = bb;
diff --git a/gcc/resource.c b/gcc/resource.c
index 861d969..442c852 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -1219,7 +1219,7 @@ init_resource_info (rtx epilogue_insn)
   bb_ticks = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
   /* Set the BLOCK_FOR_INSN of each label that starts a basic block.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (LABEL_P (BB_HEAD (bb)))
       BLOCK_FOR_INSN (BB_HEAD (bb)) = bb;
 }
@@ -1258,7 +1258,7 @@ free_resource_info (void)
       bb_ticks = NULL;
     }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (LABEL_P (BB_HEAD (bb)))
       BLOCK_FOR_INSN (BB_HEAD (bb)) = NULL;
 }
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index 73af0a7..d4baec5 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -637,7 +637,7 @@ schedule_ebbs (void)
   schedule_ebbs_init ();
 
   /* Schedule every region in the subroutine.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx head = BB_HEAD (bb);
 
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index a85ee5b..7fa9759 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -272,7 +272,7 @@ is_cfg_nonregular (void)
 
   /* If we have insns which refer to labels as non-jumped-to operands,
      then we consider the cfg not well structured.  */
-  FOR_EACH_BB (b)
+  FOR_EACH_BB_FN (b, cfun)
     FOR_BB_INSNS (b, insn)
       {
 	rtx note, next, set, dest;
@@ -317,7 +317,7 @@ is_cfg_nonregular (void)
      Unreachable loops with a single block are detected here.  This
      test is redundant with the one in find_rgns, but it's much
      cheaper to go ahead and catch the trivial case here.  */
-  FOR_EACH_BB (b)
+  FOR_EACH_BB_FN (b, cfun)
     {
       if (EDGE_COUNT (b->preds) == 0
 	  || (single_pred_p (b)
@@ -479,7 +479,7 @@ find_single_block_region (bool ebbs_p)
       probability_cutoff = PARAM_VALUE (TRACER_MIN_BRANCH_PROBABILITY);
     probability_cutoff = REG_BR_PROB_BASE / 100 * probability_cutoff;
 
-    FOR_EACH_BB (ebb_start)
+    FOR_EACH_BB_FN (ebb_start, cfun)
       {
         RGN_NR_BLOCKS (nr_regions) = 0;
         RGN_BLOCKS (nr_regions) = i;
@@ -512,7 +512,7 @@ find_single_block_region (bool ebbs_p)
       }
   }
   else
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       {
         rgn_bb_table[nr_regions] = bb->index;
         RGN_NR_BLOCKS (nr_regions) = 1;
@@ -762,7 +762,7 @@ haifa_find_rgns (void)
      the entry node by placing a nonzero value in dfs_nr.  Thus if
      dfs_nr is zero for any block, then it must be unreachable.  */
   unreachable = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (dfs_nr[bb->index] == 0)
       {
 	unreachable = 1;
@@ -773,7 +773,7 @@ haifa_find_rgns (void)
      to hold degree counts.  */
   degree = dfs_nr;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     degree[bb->index] = EDGE_COUNT (bb->preds);
 
   /* Do not perform region scheduling if there are any unreachable
@@ -807,7 +807,7 @@ haifa_find_rgns (void)
 
       /* Find blocks which are inner loop headers.  We still have non-reducible
 	 loops to consider at this point.  */
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  if (bitmap_bit_p (header, bb->index) && bitmap_bit_p (inner, bb->index))
 	    {
@@ -826,7 +826,7 @@ haifa_find_rgns (void)
 		 If there exists a block that is not dominated by the loop
 		 header, then the block is reachable from outside the loop
 		 and thus the loop is not a natural loop.  */
-	      FOR_EACH_BB (jbb)
+	      FOR_EACH_BB_FN (jbb, cfun)
 		{
 		  /* First identify blocks in the loop, except for the loop
 		     entry block.  */
@@ -874,7 +874,7 @@ haifa_find_rgns (void)
 		 Place those blocks into the queue.  */
 	      if (no_loops)
 		{
-		  FOR_EACH_BB (jbb)
+		  FOR_EACH_BB_FN (jbb, cfun)
 		    /* Leaf nodes have only a single successor which must
 		       be EXIT_BLOCK.  */
 		    if (single_succ_p (jbb)
@@ -1052,7 +1052,7 @@ haifa_find_rgns (void)
 
   /* Any block that did not end up in a region is placed into a region
      by itself.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (degree[bb->index] >= 0)
       {
 	rgn_bb_table[idx] = bb->index;
@@ -3281,7 +3281,7 @@ sched_rgn_local_init (int rgn)
 
       /* Use ->aux to implement EDGE_TO_BIT mapping.  */
       rgn_nr_edges = 0;
-      FOR_EACH_BB (block)
+      FOR_EACH_BB_FN (block, cfun)
 	{
 	  if (CONTAINING_RGN (block->index) != rgn)
 	    continue;
@@ -3291,7 +3291,7 @@ sched_rgn_local_init (int rgn)
 
       rgn_edges = XNEWVEC (edge, rgn_nr_edges);
       rgn_nr_edges = 0;
-      FOR_EACH_BB (block)
+      FOR_EACH_BB_FN (block, cfun)
 	{
 	  if (CONTAINING_RGN (block->index) != rgn)
 	    continue;
@@ -3312,7 +3312,7 @@ sched_rgn_local_init (int rgn)
       /* Cleanup ->aux used for EDGE_TO_BIT mapping.  */
       /* We don't need them anymore.  But we want to avoid duplication of
 	 aux fields in the newly created edges.  */
-      FOR_EACH_BB (block)
+      FOR_EACH_BB_FN (block, cfun)
 	{
 	  if (CONTAINING_RGN (block->index) != rgn)
 	    continue;
diff --git a/gcc/sel-sched-dump.c b/gcc/sel-sched-dump.c
index 347b5eb..2e46770 100644
--- a/gcc/sel-sched-dump.c
+++ b/gcc/sel-sched-dump.c
@@ -750,7 +750,7 @@ sel_dump_cfg_2 (FILE *f, int flags)
   if (flags & SEL_DUMP_CFG_FUNCTION_NAME)
     fprintf (f, "function [label = \"%s\"];\n", current_function_name ());
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       insn_t insn = BB_HEAD (bb);
       insn_t next_tail = NEXT_INSN (BB_END (bb));
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index f7cc9ec..942d909 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -4321,7 +4321,7 @@ init_lv_sets (void)
   basic_block bb;
 
   /* Initialize of LV sets.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     init_lv_set (bb);
 
   /* Don't forget EXIT_BLOCK.  */
@@ -4349,7 +4349,7 @@ free_lv_sets (void)
   free_lv_set (EXIT_BLOCK_PTR_FOR_FN (cfun));
 
   /* Free LV sets.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (BB_LV_SET (bb))
       free_lv_set (bb);
 }
@@ -6155,7 +6155,7 @@ make_regions_from_the_rest (void)
   for (i = 0; i < last_basic_block_for_fn (cfun); i++)
     loop_hdr[i] = -1;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (bb->loop_father && !bb->loop_father->num == 0
 	  && !(bb->flags & BB_IRREDUCIBLE_LOOP))
@@ -6165,7 +6165,7 @@ make_regions_from_the_rest (void)
   /* For each basic block degree is calculated as the number of incoming
      edges, that are going out of bbs that are not yet scheduled.
      The basic blocks that are scheduled have degree value of zero.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       degree[bb->index] = 0;
 
@@ -6183,7 +6183,7 @@ make_regions_from_the_rest (void)
 
   /* Any block that did not end up in a region is placed into a region
      by itself.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (degree[bb->index] >= 0)
       {
 	rgn_bb_table[cur_rgn_blocks] = bb->index;
diff --git a/gcc/sese.c b/gcc/sese.c
index 7e59ac8..5e47ef7 100644
--- a/gcc/sese.c
+++ b/gcc/sese.c
@@ -156,7 +156,7 @@ build_sese_loop_nests (sese region)
   basic_block bb;
   struct loop *loop0, *loop1;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb_in_sese_p (bb, region))
       {
 	struct loop *loop = bb->loop_father;
@@ -303,10 +303,10 @@ sese_build_liveouts (sese region, bitmap liveouts)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     sese_build_liveouts_bb (region, liveouts, bb);
   if (MAY_HAVE_DEBUG_STMTS)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       sese_reset_debug_liveouts_bb (region, liveouts, bb);
 }
 
diff --git a/gcc/stack-ptr-mod.c b/gcc/stack-ptr-mod.c
index 68ccd16..acca801 100644
--- a/gcc/stack-ptr-mod.c
+++ b/gcc/stack-ptr-mod.c
@@ -58,7 +58,7 @@ notice_stack_pointer_modification (void)
      been used.  */
   crtl->sp_is_unchanging = !cfun->calls_alloca;
   if (crtl->sp_is_unchanging)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       FOR_BB_INSNS (bb, insn)
         {
 	  if (INSN_P (insn))
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index 808b0a7..57c991a 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -656,7 +656,7 @@ compute_store_table (void)
   already_set = XNEWVEC (int, max_gcse_regno);
 
   /* Find all the stores we care about.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* First compute the registers set in this block.  */
       FOR_BB_INSNS (bb, insn)
@@ -1061,7 +1061,7 @@ build_store_vectors (void)
   bitmap_vector_clear (st_transp, last_basic_block_for_fn (cfun));
   regs_set_in_block = XNEWVEC (int, max_gcse_regno);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       memset (regs_set_in_block, 0, sizeof (int) * max_gcse_regno);
 
@@ -1188,7 +1188,7 @@ one_store_motion_pass (void)
 
       /* Now we want to insert the new stores which are going to be needed.  */
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	if (bitmap_bit_p (st_delete_map[bb->index], ptr->index))
 	  {
 	    delete_store (ptr, bb);
diff --git a/gcc/testsuite/g++.dg/plugin/selfassign.c b/gcc/testsuite/g++.dg/plugin/selfassign.c
index be5a204..041f25d 100644
--- a/gcc/testsuite/g++.dg/plugin/selfassign.c
+++ b/gcc/testsuite/g++.dg/plugin/selfassign.c
@@ -261,7 +261,7 @@ execute_warn_self_assign (void)
   gimple_stmt_iterator gsi;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
         warn_self_assign (gsi_stmt (gsi));
diff --git a/gcc/testsuite/gcc.dg/plugin/selfassign.c b/gcc/testsuite/gcc.dg/plugin/selfassign.c
index be5a204..041f25d 100644
--- a/gcc/testsuite/gcc.dg/plugin/selfassign.c
+++ b/gcc/testsuite/gcc.dg/plugin/selfassign.c
@@ -261,7 +261,7 @@ execute_warn_self_assign (void)
   gimple_stmt_iterator gsi;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
         warn_self_assign (gsi_stmt (gsi));
diff --git a/gcc/tracer.c b/gcc/tracer.c
index de6877a..a40cbeb 100644
--- a/gcc/tracer.c
+++ b/gcc/tracer.c
@@ -256,7 +256,7 @@ tail_duplicate (void)
   branch_ratio_cutoff =
     (REG_BR_PROB_BASE / 100 * PARAM_VALUE (TRACER_MIN_BRANCH_RATIO));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       int n = count_insns (bb);
       if (!ignore_bb_p (bb))
diff --git a/gcc/trans-mem.c b/gcc/trans-mem.c
index 2a6597d..c9af680 100644
--- a/gcc/trans-mem.c
+++ b/gcc/trans-mem.c
@@ -2656,7 +2656,7 @@ compute_transaction_bits (void)
      certainly don't need it to calculate CDI_DOMINATOR info.  */
   gate_tm_init ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bb->flags &= ~BB_IN_TRANSACTION;
 
   for (region = all_tm_regions; region; region = region->next)
diff --git a/gcc/tree-call-cdce.c b/gcc/tree-call-cdce.c
index 19402e3..32d0d5a 100644
--- a/gcc/tree-call-cdce.c
+++ b/gcc/tree-call-cdce.c
@@ -876,7 +876,7 @@ tree_call_cdce (void)
   gimple_stmt_iterator i;
   bool something_changed = false;
   auto_vec<gimple> cond_dead_built_in_calls;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Collect dead call candidates.  */
       for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index ec365b5..98434ac 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -302,7 +302,7 @@ replace_loop_annotate ()
     }
 
   /* Remove IFN_ANNOTATE. Safeguard for the case loop->latch == NULL.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gsi = gsi_last_bb (bb);
       stmt = gsi_stmt (gsi);
@@ -456,7 +456,7 @@ factor_computed_gotos (void)
      Examine the last statement in each basic block to see if the block
      ends with a computed goto.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi = gsi_last_bb (bb);
       gimple last;
@@ -635,7 +635,7 @@ fold_cond_expr_cond (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple stmt = last_stmt (bb);
 
@@ -682,7 +682,7 @@ make_edges (void)
 	     EDGE_FALLTHRU);
 
   /* Traverse the basic block array placing edges.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple last = last_stmt (bb);
       bool fallthru;
@@ -836,7 +836,7 @@ assign_discriminators (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
@@ -1055,7 +1055,7 @@ make_abnormal_goto_edges (basic_block bb, bool for_call)
   basic_block target_bb;
   gimple_stmt_iterator gsi;
 
-  FOR_EACH_BB (target_bb)
+  FOR_EACH_BB_FN (target_bb, cfun)
     {
       for (gsi = gsi_start_bb (target_bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -1235,7 +1235,7 @@ cleanup_dead_labels (void)
 
   /* Find a suitable label for each block.  We use the first user-defined
      label if there is one, or otherwise just the first label we see.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -1271,7 +1271,7 @@ cleanup_dead_labels (void)
 
   /* Now redirect all jumps/branches to the selected label.
      First do so for each block ending in a control statement.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple stmt = last_stmt (bb);
       tree label, new_label;
@@ -1363,7 +1363,7 @@ cleanup_dead_labels (void)
   /* Finally, purge dead labels.  All user-defined labels and labels that
      can be the target of non-local gotos and labels which have their
      address taken are preserved.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
       tree label_for_this_bb = label_for_bb[bb->index].label;
@@ -1487,7 +1487,7 @@ group_case_labels (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple stmt = last_stmt (bb);
       if (stmt && gimple_code (stmt) == GIMPLE_SWITCH)
@@ -2160,7 +2160,7 @@ dump_cfg_stats (FILE *file)
 	   SCALE (size), LABEL (size));
 
   num_edges = 0;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     num_edges += EDGE_COUNT (bb->succs);
   size = num_edges * sizeof (struct edge_def);
   total += size;
@@ -4894,7 +4894,7 @@ gimple_verify_flow_info (void)
 	err = 1;
       }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bool found_ctrl_stmt = false;
 
@@ -7241,7 +7241,7 @@ print_loop (FILE *file, struct loop *loop, int indent, int verbosity)
   if (verbosity >= 1)
     {
       fprintf (file, "%s{\n", s_indent);
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	if (bb->loop_father == loop)
 	  print_loops_bb (file, bb, indent, verbosity);
 
@@ -8331,7 +8331,7 @@ execute_fixup_cfg (void)
   FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
     e->count = apply_scale (e->count, count_scale);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bb->count = apply_scale (bb->count, count_scale);
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index 50b4a68..949b21d 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -640,7 +640,7 @@ cleanup_tree_cfg_1 (void)
      recording of edge to CASE_LABEL_EXPR.  */
   start_recording_case_labels ();
 
-  /* Start by iterating over all basic blocks.  We cannot use FOR_EACH_BB,
+  /* Start by iterating over all basic blocks.  We cannot use FOR_EACH_BB_FN,
      since the basic blocks may get removed.  */
   n = last_basic_block_for_fn (cfun);
   for (i = NUM_FIXED_BLOCKS; i < n; i++)
@@ -918,7 +918,7 @@ merge_phi_nodes (void)
   calculate_dominance_info (CDI_DOMINATORS);
 
   /* Find all PHI nodes that we may be able to merge.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       basic_block dest;
 
diff --git a/gcc/tree-complex.c b/gcc/tree-complex.c
index ff5ccab..8c9a3aa 100644
--- a/gcc/tree-complex.c
+++ b/gcc/tree-complex.c
@@ -207,7 +207,7 @@ init_dont_simulate_again (void)
   gimple phi;
   bool saw_a_complex_op = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -1637,7 +1637,7 @@ tree_lower_complex (void)
 
   /* ??? Ideally we'd traverse the blocks in breadth-first order.  */
   old_last_basic_block = last_basic_block_for_fn (cfun);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       if (bb->index >= old_last_basic_block)
 	continue;
diff --git a/gcc/tree-dfa.c b/gcc/tree-dfa.c
index 27d6a71..2d964d5 100644
--- a/gcc/tree-dfa.c
+++ b/gcc/tree-dfa.c
@@ -279,7 +279,7 @@ collect_dfa_stats (struct dfa_stats_d *dfa_stats_p ATTRIBUTE_UNUSED)
   memset ((void *)dfa_stats_p, 0, sizeof (struct dfa_stats_d));
 
   /* Walk all the statements in the function counting references.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
 
@@ -741,7 +741,7 @@ dump_enumerated_decls (FILE *file, int flags)
 
   memset (&wi, '\0', sizeof (wi));
   wi.info = (void *) &decl_list;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-eh.c b/gcc/tree-eh.c
index 85dc79f..467eb20 100644
--- a/gcc/tree-eh.c
+++ b/gcc/tree-eh.c
@@ -3304,7 +3304,7 @@ execute_lower_resx (void)
 
   mnt_map = pointer_map_create ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple last = last_stmt (bb);
       if (last && is_gimple_resx (last))
@@ -3710,7 +3710,7 @@ execute_lower_eh_dispatch (void)
 
   assign_filter_values ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple last = last_stmt (bb);
       if (last == NULL)
@@ -3810,7 +3810,7 @@ mark_reachable_handlers (sbitmap *r_reachablep, sbitmap *lp_reachablep)
   else
     lp_reachable = NULL;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-emutls.c b/gcc/tree-emutls.c
index 9ba25fc..32599eb 100644
--- a/gcc/tree-emutls.c
+++ b/gcc/tree-emutls.c
@@ -638,7 +638,7 @@ lower_emutls_function_body (struct cgraph_node *node)
      create a node for it.  */
   d.builtin_node = cgraph_get_create_node (d.builtin_decl);
 
-  FOR_EACH_BB (d.bb)
+  FOR_EACH_BB_FN (d.bb, cfun)
     {
       gimple_stmt_iterator gsi;
       unsigned int i, nedge;
diff --git a/gcc/tree-if-conv.c b/gcc/tree-if-conv.c
index 7f6a150..71a25f1 100644
--- a/gcc/tree-if-conv.c
+++ b/gcc/tree-if-conv.c
@@ -1815,7 +1815,7 @@ main_tree_if_conversion (void)
 #ifdef ENABLE_CHECKING
   {
     basic_block bb;
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       gcc_assert (!bb->aux);
   }
 #endif
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index ed06cb9..ab8e40b 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -4569,7 +4569,7 @@ optimize_inline_calls (tree fn)
      will split id->current_basic_block, and the new blocks will
      follow it; we'll trudge through them, processing their CALL_EXPRs
      along the way.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     inlined_p |= gimple_expand_calls_inline (bb, &id);
 
   pop_gimplify_context (NULL);
diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c
index b6d3dd7..8e539f2 100644
--- a/gcc/tree-into-ssa.c
+++ b/gcc/tree-into-ssa.c
@@ -2320,7 +2320,7 @@ rewrite_into_ssa (void)
 
   /* Initialize dominance frontier.  */
   dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
 
   /* 1- Compute dominance frontiers.  */
@@ -2337,7 +2337,7 @@ rewrite_into_ssa (void)
   rewrite_blocks (ENTRY_BLOCK_PTR_FOR_FN (cfun), REWRITE_ALL);
 
   /* Free allocated memory.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_clear (&dfs[bb->index]);
   free (dfs);
 
@@ -3270,7 +3270,7 @@ update_ssa (unsigned update_flags)
       /* If the caller requested PHI nodes to be added, compute
 	 dominance frontiers.  */
       dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
       compute_dominance_frontiers (dfs);
 
@@ -3296,7 +3296,7 @@ update_ssa (unsigned update_flags)
 	insert_updated_phi_nodes_for (sym, dfs, blocks_to_update,
 	                              update_flags);
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	bitmap_clear (&dfs[bb->index]);
       free (dfs);
 
diff --git a/gcc/tree-nrv.c b/gcc/tree-nrv.c
index b42993d..e00463d 100644
--- a/gcc/tree-nrv.c
+++ b/gcc/tree-nrv.c
@@ -144,7 +144,7 @@ tree_nrv (void)
     return 0;
 
   /* Look through each block for assignments to the RESULT_DECL.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -238,7 +238,7 @@ tree_nrv (void)
      RESULT.  */
   data.var = found;
   data.result = result;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); )
 	{
@@ -358,7 +358,7 @@ execute_return_slot_opt (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/tree-object-size.c b/gcc/tree-object-size.c
index 6a587e1..c83345f 100644
--- a/gcc/tree-object-size.c
+++ b/gcc/tree-object-size.c
@@ -1211,7 +1211,7 @@ static unsigned int
 compute_object_sizes (void)
 {
   basic_block bb;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
       for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
diff --git a/gcc/tree-outof-ssa.c b/gcc/tree-outof-ssa.c
index 8df3026..c5bba789 100644
--- a/gcc/tree-outof-ssa.c
+++ b/gcc/tree-outof-ssa.c
@@ -835,7 +835,7 @@ eliminate_useless_phis (void)
   gimple_stmt_iterator gsi;
   tree result;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); )
         {
@@ -893,7 +893,7 @@ rewrite_trees (var_map map ATTRIBUTE_UNUSED)
   /* Search for PHIs where the destination has no partition, but one
      or more arguments has a partition.  This should not happen and can
      create incorrect code.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -1101,7 +1101,7 @@ insert_backedge_copies (void)
 
   mark_dfs_back_edges ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Mark block as possibly needing calculation of UIDs.  */
       bb->aux = &bb->aux;
diff --git a/gcc/tree-profile.c b/gcc/tree-profile.c
index 537c246..51e997c 100644
--- a/gcc/tree-profile.c
+++ b/gcc/tree-profile.c
@@ -637,7 +637,7 @@ tree_profiling (void)
 
       push_cfun (DECL_STRUCT_FUNCTION (node->decl));
 
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  gimple_stmt_iterator gsi;
 	  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/tree-scalar-evolution.c b/gcc/tree-scalar-evolution.c
index ada942d..59e44cb 100644
--- a/gcc/tree-scalar-evolution.c
+++ b/gcc/tree-scalar-evolution.c
@@ -3276,7 +3276,7 @@ scev_const_prop (void)
   if (number_of_loops (cfun) <= 1)
     return 0;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       loop = bb->loop_father;
 
diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
index 9aa526f..ebd4218 100644
--- a/gcc/tree-sra.c
+++ b/gcc/tree-sra.c
@@ -1252,7 +1252,7 @@ scan_function (void)
   basic_block bb;
   bool ret = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -3311,7 +3311,7 @@ sra_modify_function_body (void)
   bool cfg_changed = false;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi = gsi_start_bb (bb);
       while (!gsi_end_p (gsi))
@@ -3795,7 +3795,7 @@ propagate_dereference_distances (void)
 
   auto_vec<basic_block> queue (last_basic_block_for_fn (cfun));
   queue.quick_push (ENTRY_BLOCK_PTR_FOR_FN (cfun));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       queue.quick_push (bb);
       bb->aux = bb;
@@ -4572,7 +4572,7 @@ ipa_sra_modify_function_body (ipa_parm_adjustment_vec adjustments)
   bool cfg_changed = false;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
@@ -4811,7 +4811,7 @@ convert_callers (struct cgraph_node *node, tree old_decl,
   if (!encountered_recursive_call)
     return;
 
-  FOR_EACH_BB (this_block)
+  FOR_EACH_BB_FN (this_block, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-ssa-ccp.c b/gcc/tree-ssa-ccp.c
index 3d05258..7e07771 100644
--- a/gcc/tree-ssa-ccp.c
+++ b/gcc/tree-ssa-ccp.c
@@ -774,7 +774,7 @@ ccp_initialize (void)
   const_val = XCNEWVEC (prop_value_t, n_const_val);
 
   /* Initialize simulation flags for PHI nodes and statements.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -808,7 +808,7 @@ ccp_initialize (void)
   /* Now process PHI nodes.  We never clear the simulate_again flag on
      phi nodes, since we do not know which edges are executable yet,
      except for phi nodes for virtual operands when we do not do store ccp.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -2508,7 +2508,7 @@ execute_fold_all_builtins (void)
   basic_block bb;
   unsigned int todoflags = 0;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
       for (i = gsi_start_bb (bb); !gsi_end_p (i); )
diff --git a/gcc/tree-ssa-coalesce.c b/gcc/tree-ssa-coalesce.c
index 70158d5..38a4078 100644
--- a/gcc/tree-ssa-coalesce.c
+++ b/gcc/tree-ssa-coalesce.c
@@ -821,7 +821,7 @@ build_ssa_conflict_graph (tree_live_info_p liveinfo)
 
   live = new_live_track (map);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
@@ -929,7 +929,7 @@ create_outofssa_var_map (coalesce_list_p cl, bitmap used_in_copy)
 
   map = init_var_map (num_ssa_names);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       tree arg;
 
@@ -1183,7 +1183,7 @@ coalesce_partitions (var_map map, ssa_conflicts_p graph, coalesce_list_p cl,
      in the coalesce list because they do not need to be sorted, and simply
      consume extra memory/compilation time in large programs.  */
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->preds)
 	if (e->flags & EDGE_ABNORMAL)
diff --git a/gcc/tree-ssa-copy.c b/gcc/tree-ssa-copy.c
index 0dd5e14..3da262b 100644
--- a/gcc/tree-ssa-copy.c
+++ b/gcc/tree-ssa-copy.c
@@ -469,7 +469,7 @@ init_copy_prop (void)
   n_copy_of = num_ssa_names;
   copy_of = XCNEWVEC (prop_value_t, n_copy_of);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
       int depth = bb_loop_depth (bb);
diff --git a/gcc/tree-ssa-copyrename.c b/gcc/tree-ssa-copyrename.c
index 90e070f..c7d514f 100644
--- a/gcc/tree-ssa-copyrename.c
+++ b/gcc/tree-ssa-copyrename.c
@@ -325,7 +325,7 @@ rename_ssa_copies (void)
 
   map = init_var_map (num_ssa_names);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Scan for real copies.  */
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -341,7 +341,7 @@ rename_ssa_copies (void)
 	}
     }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Treat PHI nodes as copies between the result and each argument.  */
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c
index 701dd44..5abef5c 100644
--- a/gcc/tree-ssa-dce.c
+++ b/gcc/tree-ssa-dce.c
@@ -374,7 +374,7 @@ find_obviously_necessary_stmts (bool aggressive)
   gimple phi, stmt;
   int flags;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* PHI nodes are never inherently necessary.  */
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -404,7 +404,7 @@ find_obviously_necessary_stmts (bool aggressive)
       struct loop *loop;
       scev_initialize ();
       if (mark_irreducible_loops ())
-	FOR_EACH_BB (bb)
+	FOR_EACH_BB_FN (bb, cfun)
 	  {
 	    edge_iterator ei;
 	    FOR_EACH_EDGE (e, ei, bb->succs)
@@ -1325,7 +1325,7 @@ eliminate_unnecessary_stmts (void)
 	    }
 	}
     }
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Remove dead PHI nodes.  */
       something_changed |= remove_dead_phis (bb);
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index 6cf60be..2bd2a86 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -795,7 +795,7 @@ free_all_edge_infos (void)
   edge_iterator ei;
   edge e;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->preds)
         {
@@ -866,7 +866,7 @@ tree_ssa_dominator_optimize (void)
   {
     gimple_stmt_iterator gsi;
     basic_block bb;
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       {
 	for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	  update_stmt_if_modified (gsi_stmt (gsi));
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index 6e6d115..a77a639 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -3386,7 +3386,7 @@ ssa_forward_propagate_and_combine (void)
 
   cfg_changed = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
index 6ccf2fb..da7198b 100644
--- a/gcc/tree-ssa-live.c
+++ b/gcc/tree-ssa-live.c
@@ -673,7 +673,7 @@ clear_unused_block_pointer (void)
   basic_block bb;
   gimple_stmt_iterator gsi;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	unsigned i;
@@ -791,7 +791,7 @@ remove_unused_locals (void)
   usedvars = BITMAP_ALLOC (NULL);
 
   /* Walk the CFG marking all referenced symbols.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       size_t i;
@@ -856,7 +856,7 @@ remove_unused_locals (void)
      ignores them, and the second pass (if there were any) tries to remove
      them.  */
   if (have_local_clobbers)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       {
 	gimple_stmt_iterator gsi;
 
@@ -963,11 +963,11 @@ new_tree_live_info (var_map map)
   live->num_blocks = last_basic_block_for_fn (cfun);
 
   live->livein = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_initialize (&live->livein[bb->index], &liveness_bitmap_obstack);
 
   live->liveout = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_initialize (&live->liveout[bb->index], &liveness_bitmap_obstack);
 
   live->work_stack = XNEWVEC (int, last_basic_block_for_fn (cfun));
@@ -1149,11 +1149,11 @@ calculate_live_on_exit (tree_live_info_p liveinfo)
   edge_iterator ei;
 
   /* live on entry calculations used liveout vectors for defs, clear them.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     bitmap_clear (&liveinfo->liveout[bb->index]);
 
   /* Set all the live-on-exit bits for uses in PHIs.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       size_t i;
@@ -1294,7 +1294,7 @@ dump_live_info (FILE *f, tree_live_info_p live, int flag)
 
   if ((flag & LIVEDUMP_ENTRY) && live->livein)
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  fprintf (f, "\nLive on entry to BB%d : ", bb->index);
 	  EXECUTE_IF_SET_IN_BITMAP (&live->livein[bb->index], 0, i, bi)
@@ -1308,7 +1308,7 @@ dump_live_info (FILE *f, tree_live_info_p live, int flag)
 
   if ((flag & LIVEDUMP_EXIT) && live->liveout)
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	{
 	  fprintf (f, "\nLive on exit from BB%d : ", bb->index);
 	  EXECUTE_IF_SET_IN_BITMAP (&live->liveout[bb->index], 0, i, bi)
diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index 3aaf2b2..cbcdc37 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -1601,7 +1601,7 @@ analyze_memory_references (void)
      loops postorder.  */
   i = 0;
   bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     if (bb->loop_father != current_loops->tree_root)
       bbs[i++] = bb;
   n = i;
@@ -2406,7 +2406,7 @@ fill_always_executed_in (void)
   struct loop *loop;
 
   bitmap_clear (contains_call);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -2478,7 +2478,7 @@ tree_ssa_lim_finalize (void)
 
   free_aux_for_edges ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     SET_ALWAYS_EXECUTED_IN (bb, NULL);
 
   bitmap_obstack_release (&lim_bitmap_obstack);
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index 76d5958..ed30c7b0 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -463,7 +463,7 @@ find_uses_to_rename (bitmap changed_bbs, bitmap *use_blocks, bitmap need_phis)
     EXECUTE_IF_SET_IN_BITMAP (changed_bbs, 0, index, bi)
       find_uses_to_rename_bb (BASIC_BLOCK_FOR_FN (cfun, index), use_blocks, need_phis);
   else
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       find_uses_to_rename_bb (bb, use_blocks, need_phis);
 }
 
@@ -602,7 +602,7 @@ verify_loop_closed_ssa (bool verify_ssa_p)
 
   timevar_push (TV_VERIFY_LOOP_CLOSED);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (bsi = gsi_start_phis (bb); !gsi_end_p (bsi); gsi_next (&bsi))
 	{
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index f77c016..1c89f45 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -527,7 +527,7 @@ execute_cse_reciprocals (void)
   calculate_dominance_info (CDI_POST_DOMINATORS);
 
 #ifdef ENABLE_CHECKING
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     gcc_assert (!bb->aux);
 #endif
 
@@ -540,7 +540,7 @@ execute_cse_reciprocals (void)
 	  execute_cse_reciprocals_1 (NULL, name);
       }
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       gimple phi;
@@ -1419,7 +1419,7 @@ execute_cse_sincos (void)
   calculate_dominance_info (CDI_DOMINATORS);
   memset (&sincos_stats, 0, sizeof (sincos_stats));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       bool cleanup_eh = false;
@@ -1939,7 +1939,7 @@ execute_optimize_bswap (void)
 
   memset (&bswap_stats, 0, sizeof (bswap_stats));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
@@ -2785,7 +2785,7 @@ execute_optimize_widening_mul (void)
 
   memset (&widen_mul_stats, 0, sizeof (widen_mul_stats));
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c
index 55ae68b..f9f084b 100644
--- a/gcc/tree-ssa-propagate.c
+++ b/gcc/tree-ssa-propagate.c
@@ -1097,7 +1097,7 @@ substitute_and_fold (ssa_prop_get_value_fn get_value_fn,
       }
 
   /* Propagate into all uses and fold.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
diff --git a/gcc/tree-ssa-structalias.c b/gcc/tree-ssa-structalias.c
index 16679f4..9ec1512 100644
--- a/gcc/tree-ssa-structalias.c
+++ b/gcc/tree-ssa-structalias.c
@@ -6778,7 +6778,7 @@ compute_points_to_sets (void)
   intra_create_variable_infos ();
 
   /* Now walk all statements and build the constraint set.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
@@ -6825,7 +6825,7 @@ compute_points_to_sets (void)
     }
 
   /* Compute the call-used/clobbered sets.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
 
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index a0eac67..4e05246 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -754,7 +754,7 @@ find_same_succ (void)
   same_succ same = same_succ_alloc ();
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       find_same_succ_bb (bb, &same);
       if (same == NULL)
@@ -1015,7 +1015,7 @@ reset_cluster_vectors (void)
   for (i = 0; i < all_clusters.length (); ++i)
     delete_cluster (all_clusters[i]);
   all_clusters.truncate (0);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     BB_CLUSTER (bb) = NULL;
 }
 
diff --git a/gcc/tree-ssa-ter.c b/gcc/tree-ssa-ter.c
index fa6a248..22ae47b 100644
--- a/gcc/tree-ssa-ter.c
+++ b/gcc/tree-ssa-ter.c
@@ -683,7 +683,7 @@ find_replaceable_exprs (var_map map)
 
   bitmap_obstack_initialize (&ter_bitmap_obstack);
   table = new_temp_expr_table (map);
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       find_replaceable_in_bb (table, bb);
       gcc_checking_assert (bitmap_empty_p (table->partition_in_use));
diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index 9289c11..6f978e2 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -1631,7 +1631,7 @@ thread_through_all_blocks (bool may_peel_loop_headers)
      ahead and thread it, else ignore it.  */
   basic_block bb;
   edge e;
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* If we do end up threading here, we can remove elements from
 	 BB->preds.  Thus we can not use the FOR_EACH_EDGE iterator.  */
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index d38e0dd..63a2e10 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -65,7 +65,7 @@ associate_equivalences_with_edges (void)
 
   /* Walk over each block.  If the block ends with a control statement,
      then it might create a useful equivalence.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi = gsi_last_bb (bb);
       gimple stmt;
@@ -406,7 +406,7 @@ tree_ssa_uncprop (void)
   /* we just need to empty elements out of the hash table, and cleanup the
     AUX field on the edges.  */
   val_ssa_equiv.dispose ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       edge_iterator ei;
diff --git a/gcc/tree-ssa-uninit.c b/gcc/tree-ssa-uninit.c
index 4fd5fb8..c6b0a90 100644
--- a/gcc/tree-ssa-uninit.c
+++ b/gcc/tree-ssa-uninit.c
@@ -176,7 +176,7 @@ warn_uninitialized_vars (bool warn_possibly_uninitialized)
   gimple_stmt_iterator gsi;
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       bool always_executed = dominated_by_p (CDI_POST_DOMINATORS,
 					     single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)), bb);
@@ -2130,7 +2130,7 @@ execute_late_warn_uninitialized (void)
   added_to_worklist = pointer_set_create ();
 
   /* Initialize worklist  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
         gimple phi = gsi_stmt (gsi);
diff --git a/gcc/tree-ssa.c b/gcc/tree-ssa.c
index f1025b2..8c1aaf2 100644
--- a/gcc/tree-ssa.c
+++ b/gcc/tree-ssa.c
@@ -999,7 +999,7 @@ verify_ssa (bool check_modified_stmt)
 
   /* Now verify all the uses and make sure they agree with the definitions
      found in the previous pass.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge e;
       gimple phi;
@@ -1456,7 +1456,7 @@ execute_update_addresses_taken (void)
 
   /* Collect into ADDRESSES_TAKEN all variables whose address is taken within
      the function body.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -1558,7 +1558,7 @@ execute_update_addresses_taken (void)
      variables and operands need to be rewritten to expose bare symbols.  */
   if (!bitmap_empty_p (suitable_for_renaming))
     {
-      FOR_EACH_BB (bb)
+      FOR_EACH_BB_FN (bb, cfun)
 	for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);)
 	  {
 	    gimple stmt = gsi_stmt (gsi);
diff --git a/gcc/tree-stdarg.c b/gcc/tree-stdarg.c
index 8b168e0..dc82340 100644
--- a/gcc/tree-stdarg.c
+++ b/gcc/tree-stdarg.c
@@ -536,7 +536,7 @@ check_all_va_list_escapes (struct stdarg_info *si)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -703,7 +703,7 @@ execute_optimize_stdarg (void)
 			   || TREE_TYPE (cfun_va_list) == char_type_node);
   gcc_assert (is_gimple_reg_type (cfun_va_list) == va_list_simple_ptr);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -813,7 +813,7 @@ execute_optimize_stdarg (void)
   memset (&wi, 0, sizeof (wi));
   wi.info = si.va_list_vars;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
diff --git a/gcc/tree-switch-conversion.c b/gcc/tree-switch-conversion.c
index f6b17b8..efcc94d 100644
--- a/gcc/tree-switch-conversion.c
+++ b/gcc/tree-switch-conversion.c
@@ -1420,7 +1420,7 @@ do_switchconv (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
   {
     const char *failure_reason;
     gimple stmt = last_stmt (bb);
diff --git a/gcc/tree-vect-generic.c b/gcc/tree-vect-generic.c
index d55485d..098012c 100644
--- a/gcc/tree-vect-generic.c
+++ b/gcc/tree-vect-generic.c
@@ -1541,7 +1541,7 @@ expand_vector_operations (void)
   basic_block bb;
   bool cfg_changed = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
index c11f8a8..e5d201f 100644
--- a/gcc/tree-vectorizer.c
+++ b/gcc/tree-vectorizer.c
@@ -157,7 +157,7 @@ adjust_simduid_builtins (hash_table <simduid_to_vf> &htab)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator i;
 
@@ -265,7 +265,7 @@ note_simd_array_uses (hash_table <simd_array_to_simduid> *htab)
   wi.info = &ns;
   ns.htab = htab;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	gimple stmt = gsi_stmt (gsi);
@@ -475,7 +475,7 @@ execute_vect_slp (void)
 
   init_stmt_vec_info_vec ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       vect_location = find_bb_location (bb);
 
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 06b6259..8ab6d76 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -6431,7 +6431,7 @@ check_all_array_refs (void)
   basic_block bb;
   gimple_stmt_iterator si;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       edge_iterator ei;
       edge e;
@@ -6593,7 +6593,7 @@ remove_range_assertions (void)
   /* Note that the BSI iterator bump happens at the bottom of the
      loop and no bump is necessary if we're removing the statement
      referenced by the current BSI.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (si = gsi_after_labels (bb), is_unreachable = -1; !gsi_end_p (si);)
       {
 	gimple stmt = gsi_stmt (si);
@@ -6708,7 +6708,7 @@ vrp_initialize (void)
   vr_value = XCNEWVEC (value_range_t *, num_vr_values);
   vr_phi_edge_counts = XCNEWVEC (int, num_ssa_names);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
 
@@ -9543,7 +9543,7 @@ identify_jump_threads (void)
      I doubt it's worth the effort for the classes of jump
      threading opportunities we are trying to identify at this
      point in compilation.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       gimple last;
 
diff --git a/gcc/tsan.c b/gcc/tsan.c
index 4efcfe5..d12459f 100644
--- a/gcc/tsan.c
+++ b/gcc/tsan.c
@@ -640,7 +640,7 @@ instrument_memory_accesses (void)
   gimple_stmt_iterator gsi;
   bool fentry_exit_instrument = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       fentry_exit_instrument |= instrument_gimple (&gsi);
   return fentry_exit_instrument;
diff --git a/gcc/ubsan.c b/gcc/ubsan.c
index 846e884..51b4f8d 100644
--- a/gcc/ubsan.c
+++ b/gcc/ubsan.c
@@ -741,7 +741,7 @@ ubsan_pass (void)
   basic_block bb;
   gimple_stmt_iterator gsi;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);)
 	{
diff --git a/gcc/value-prof.c b/gcc/value-prof.c
index d509354..c684835 100644
--- a/gcc/value-prof.c
+++ b/gcc/value-prof.c
@@ -542,7 +542,7 @@ verify_histograms (void)
 
   error_found = false;
   visited_hists = pointer_set_create ();
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       {
 	gimple stmt = gsi_stmt (gsi);
@@ -648,7 +648,7 @@ gimple_value_profile_transformations (void)
   gimple_stmt_iterator gsi;
   bool changed = false;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
@@ -1944,7 +1944,7 @@ gimple_find_values_to_profile (histogram_values *values)
   histogram_value hist = NULL;
   values->create (0);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
       gimple_values_to_profile (gsi_stmt (gsi), values);
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 5bd0799..175ec01 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -6941,7 +6941,7 @@ vt_find_locations (void)
   in_pending = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (in_worklist);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     fibheap_insert (pending, bb_order[bb->index], bb);
   bitmap_ones (in_pending);
 
@@ -7101,7 +7101,7 @@ vt_find_locations (void)
     }
 
   if (success && MAY_HAVE_DEBUG_INSNS)
-    FOR_EACH_BB (bb)
+    FOR_EACH_BB_FN (bb, cfun)
       gcc_assert (VTI (bb)->flooded);
 
   free (bb_order);
@@ -7229,7 +7229,7 @@ dump_dataflow_sets (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       fprintf (dump_file, "\nBasic block %d:\n", bb->index);
       fprintf (dump_file, "IN:\n");
@@ -9402,7 +9402,7 @@ vt_emit_notes (void)
 
   /* Free memory occupied by the out hash tables, as they aren't used
      anymore.  */
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     dataflow_set_clear (&VTI (bb)->out);
 
   /* Enable emitting notes by functions (mainly by set_variable_part and
@@ -9418,7 +9418,7 @@ vt_emit_notes (void)
 
   dataflow_set_init (&cur);
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       /* Emit the notes for changes of variable locations between two
 	 subsequent basic blocks.  */
@@ -9995,7 +9995,7 @@ vt_initialize (void)
 
   vt_add_function_parameters ();
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       rtx insn;
       HOST_WIDE_INT pre, post = 0;
@@ -10138,7 +10138,7 @@ delete_debug_insns (void)
   if (!MAY_HAVE_DEBUG_INSNS)
     return;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       FOR_BB_INSNS_SAFE (bb, insn, next)
 	if (DEBUG_INSN_P (insn))
@@ -10181,7 +10181,7 @@ vt_finalize (void)
 {
   basic_block bb;
 
-  FOR_EACH_BB (bb)
+  FOR_EACH_BB_FN (bb, cfun)
     {
       VTI (bb)->mos.release ();
     }
-- 
1.7.11.7


* [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (9 preceding siblings ...)
  2013-12-06 15:08                     ` [PATCH 08/13] Eliminate label_to_block_map macro David Malcolm
@ 2013-12-06 15:09                     ` David Malcolm
  2013-12-06 15:58                       ` Steven Bosscher
  2013-12-06 15:12                     ` [PATCH 13/13] Eliminate FOR_ALL_BB macro David Malcolm
                                       ` (2 subsequent siblings)
  13 siblings, 1 reply; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:09 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

	* basic-block.h (last_basic_block): Eliminate macro.

	* asan.c (transform_statements): Eliminate use of last_basic_block
	in favor of last_basic_block_for_fn, in order to make use of cfun
	explicit.
	* bb-reorder.c (copy_bb, reorder_basic_blocks): Likewise.
	* bt-load.c (compute_defs_uses_and_gen, compute_kill, compute_out,
	link_btr_uses, build_btr_def_use_webs, migrate_btr_defs): Likewise.
	* cfg.c (compact_blocks): Likewise.
	* cfganal.c (mark_dfs_back_edges,
	control_dependences::control_dependences, post_order_compute,
	pre_and_rev_post_order_compute_fn, dfs_enumerate_from, compute_idf,
	single_pred_before_succ_order): Likewise.
	* cfgbuild.c (make_edges): Likewise.
	* cfgexpand.c (add_scope_conflicts, gimple_expand_cfg): Likewise.
	* cfghooks.c (verify_flow_info): Likewise.
	* cfgloop.c (verify_loop_structure): Likewise.
	* cfgloopanal.c (just_once_each_iteration_p,
	mark_irreducible_loops): Likewise.
	* cfgloopmanip.c (fix_bb_placements, remove_path,
	update_dominators_in_loop): Likewise.
	* cfgrtl.c (create_basic_block_structure, rtl_create_basic_block,
	break_superblocks, rtl_flow_call_edges_add): Likewise.
	* config/epiphany/resolve-sw-modes.c (resolve_sw_modes): Likewise.
	* config/frv/frv.c (frv_optimize_membar): Likewise.
	* config/mips/mips.c (r10k_insert_cache_barriers): Likewise.
	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.
	* cprop.c (compute_local_properties, find_implicit_sets,
	bypass_conditional_jumps, one_cprop_pass): Likewise.
	* cse.c (cse_main): Likewise.
	* df-core.c (rest_of_handle_df_initialize, df_worklist_dataflow,
	df_analyze, df_grow_bb_info, df_compact_blocks): Likewise.
	* df-problems.c (df_lr_verify_solution_start,
	df_live_verify_solution_start, df_md_local_compute): Likewise.
	* dominance.c (init_dom_info, calc_dfs_tree_nonrec, calc_dfs_tree,
	calc_idoms): Likewise.
	* domwalk.c (dom_walker::walk): Likewise.
	* dse.c (dse_step0, dse_step3): Likewise.
	* function.c (epilogue_done): Likewise.
	* gcse.c (alloc_gcse_mem, compute_local_properties,
	prune_insertions_deletions, compute_pre_data,
	pre_expr_reaches_here_p, one_pre_gcse_pass,
	compute_code_hoist_vbeinout, should_hoist_expr_to_dom, hoist_code,
	one_code_hoisting_pass): Likewise.
	* graph.c (draw_cfg_nodes_no_loops): Likewise.
	* graphite-sese-to-poly.c (build_scop_bbs): Likewise.
	* haifa-sched.c (unlink_bb_notes): Likewise.
	* ipa-split.c (execute_split_functions): Likewise.
	* ira-build.c (create_loop_tree_nodes,
	remove_unnecessary_regions): Likewise.
	* ira-emit.c (ira_emit): Likewise.
	* ira.c (find_moveable_pseudos, ira): Likewise.
	* lcm.c (compute_antinout_edge, compute_laterin,
	compute_insert_delete, pre_edge_lcm, compute_available,
	compute_nearerout, compute_rev_insert_delete,
	pre_edge_rev_lcm): Likewise.
	* loop-unroll.c (opt_info_start_duplication,
	apply_opt_in_copies): Likewise.
	* lower-subreg.c (decompose_multiword_subregs): Likewise.
	* lra-lives.c (lra_create_live_ranges): Likewise.
	* lra.c (lra): Likewise.
	* mode-switching.c (optimize_mode_switching): Likewise.
	* recog.c (split_all_insns): Likewise.
	* regcprop.c (copyprop_hardreg_forward): Likewise.
	* regrename.c (regrename_analyze): Likewise.
	* reload1.c (reload): Likewise.
	* resource.c (init_resource_info): Likewise.
	* sched-rgn.c (haifa_find_rgns, extend_rgns, compute_trg_info,
	realloc_bb_state_array, schedule_region, extend_regions): Likewise.
	* sel-sched-ir.c (sel_extend_global_bb_info, extend_region_bb_info,
	recompute_rev_top_order, sel_init_pipelining,
	make_regions_from_the_rest): Likewise.
	* store-motion.c (remove_reachable_equiv_notes,
	build_store_vectors): Likewise.
	* tracer.c (tail_duplicate): Likewise.
	* trans-mem.c (tm_region_init, get_bb_regions_instrumented): Likewise.
	* tree-cfg.c (create_bb, cleanup_dead_labels, gimple_dump_cfg,
	gimple_flow_call_edges_add): Likewise.
	* tree-cfgcleanup.c (split_bbs_on_noreturn_calls,
	cleanup_tree_cfg_1): Likewise.
	* tree-complex.c (tree_lower_complex): Likewise.
	* tree-inline.c (copy_cfg_body): Likewise.
	* tree-into-ssa.c (mark_phi_for_rewrite, rewrite_into_ssa,
	prepare_def_site_for, update_ssa): Likewise.
	* tree-ssa-dce.c (tree_dce_init, perform_tree_ssa_dce): Likewise.
	* tree-ssa-dom.c (record_edge_info): Likewise.
	* tree-ssa-live.c (new_tree_live_info, live_worklist): Likewise.
	* tree-ssa-loop-im.c (fill_always_executed_in_1): Likewise.
	* tree-ssa-loop-manip.c (copy_phi_node_args,
	gimple_duplicate_loop_to_header_edge): Likewise.
	* tree-ssa-pre.c (compute_antic): Likewise.
	* tree-ssa-propagate.c (ssa_prop_init): Likewise.
	* tree-ssa-reassoc.c (init_reassoc): Likewise.
	* tree-ssa-sccvn.c (init_scc_vn): Likewise.
	* tree-ssa-tail-merge.c (init_worklist): Likewise.
	* tree-ssa-uncprop.c (associate_equivalences_with_edges): Likewise.
	* tree-stdarg.c (reachable_at_most_once): Likewise.
	* tree-vrp.c (find_assert_locations): Likewise.
	* var-tracking.c (vt_find_locations): Likewise.
---
 gcc/asan.c                             |   2 +-
 gcc/basic-block.h                      |   3 -
 gcc/bb-reorder.c                       |   7 ++-
 gcc/bt-load.c                          |  30 +++++-----
 gcc/cfg.c                              |   4 +-
 gcc/cfganal.c                          |  25 ++++----
 gcc/cfgbuild.c                         |   2 +-
 gcc/cfgexpand.c                        |   4 +-
 gcc/cfghooks.c                         |   4 +-
 gcc/cfgloop.c                          |   4 +-
 gcc/cfgloopanal.c                      |   4 +-
 gcc/cfgloopmanip.c                     |   6 +-
 gcc/cfgrtl.c                           |  13 +++--
 gcc/config/epiphany/resolve-sw-modes.c |   4 +-
 gcc/config/frv/frv.c                   |   4 +-
 gcc/config/mips/mips.c                 |   4 +-
 gcc/config/spu/spu.c                   |   2 +-
 gcc/cprop.c                            |  15 ++---
 gcc/cse.c                              |   4 +-
 gcc/df-core.c                          |  26 +++++----
 gcc/df-problems.c                      |  10 ++--
 gcc/dominance.c                        |  15 +++--
 gcc/domwalk.c                          |   2 +-
 gcc/dse.c                              |   4 +-
 gcc/function.c                         |   2 +-
 gcc/gcse.c                             |  32 +++++-----
 gcc/graph.c                            |   2 +-
 gcc/graphite-sese-to-poly.c            |   2 +-
 gcc/haifa-sched.c                      |   2 +-
 gcc/ipa-split.c                        |   2 +-
 gcc/ira-build.c                        |  13 +++--
 gcc/ira-emit.c                         |  10 ++--
 gcc/ira.c                              |  12 ++--
 gcc/lcm.c                              | 103 +++++++++++++++++++--------------
 gcc/loop-unroll.c                      |  10 +++-
 gcc/lower-subreg.c                     |   2 +-
 gcc/lra-lives.c                        |   2 +-
 gcc/lra.c                              |   2 +-
 gcc/mode-switching.c                   |  17 +++---
 gcc/recog.c                            |   2 +-
 gcc/regcprop.c                         |   4 +-
 gcc/regrename.c                        |   2 +-
 gcc/reload1.c                          |   2 +-
 gcc/resource.c                         |   2 +-
 gcc/sched-rgn.c                        |  43 ++++++++------
 gcc/sel-sched-ir.c                     |  17 +++---
 gcc/store-motion.c                     |  32 +++++-----
 gcc/tracer.c                           |   6 +-
 gcc/trans-mem.c                        |   4 +-
 gcc/tree-cfg.c                         |  19 +++---
 gcc/tree-cfgcleanup.c                  |   4 +-
 gcc/tree-complex.c                     |   2 +-
 gcc/tree-inline.c                      |   4 +-
 gcc/tree-into-ssa.c                    |  19 +++---
 gcc/tree-ssa-dce.c                     |   7 ++-
 gcc/tree-ssa-dom.c                     |   2 +-
 gcc/tree-ssa-live.c                    |  10 ++--
 gcc/tree-ssa-loop-im.c                 |   2 +-
 gcc/tree-ssa-loop-manip.c              |   8 +--
 gcc/tree-ssa-pre.c                     |   4 +-
 gcc/tree-ssa-propagate.c               |   4 +-
 gcc/tree-ssa-reassoc.c                 |   2 +-
 gcc/tree-ssa-sccvn.c                   |   2 +-
 gcc/tree-ssa-tail-merge.c              |   2 +-
 gcc/tree-ssa-uncprop.c                 |   2 +-
 gcc/tree-stdarg.c                      |   2 +-
 gcc/tree-vrp.c                         |  10 ++--
 gcc/var-tracking.c                     |   8 +--
 68 files changed, 349 insertions(+), 289 deletions(-)

diff --git a/gcc/asan.c b/gcc/asan.c
index 74140d6..09c0667 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -2041,7 +2041,7 @@ transform_statements (void)
 {
   basic_block bb, last_bb = NULL;
   gimple_stmt_iterator i;
-  int saved_last_basic_block = last_basic_block;
+  int saved_last_basic_block = last_basic_block_for_fn (cfun);
 
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index d000a43..174b650 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -326,9 +326,6 @@ struct GTY(()) control_flow_graph {
 #define SET_BASIC_BLOCK_FOR_FN(FN,N,BB) \
   ((*basic_block_info_for_fn (FN))[(N)] = (BB))
 
-/* Defines for textual backward source compatibility.  */
-#define last_basic_block	(cfun->cfg->x_last_basic_block)
-
 /* For iterating over basic blocks.  */
 #define FOR_BB_BETWEEN(BB, FROM, TO, DIR) \
   for (BB = FROM; BB != TO; BB = BB->DIR)
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index fc7b5b7..363af2d 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -826,12 +826,13 @@ copy_bb (basic_block old_bb, edge e, basic_block bb, int trace)
 	     "Duplicated bb %d (created bb %d)\n",
 	     old_bb->index, new_bb->index);
 
-  if (new_bb->index >= array_size || last_basic_block > array_size)
+  if (new_bb->index >= array_size
+      || last_basic_block_for_fn (cfun) > array_size)
     {
       int i;
       int new_size;
 
-      new_size = MAX (last_basic_block, new_bb->index + 1);
+      new_size = MAX (last_basic_block_for_fn (cfun), new_bb->index + 1);
       new_size = GET_ARRAY_SIZE (new_size);
       bbd = XRESIZEVEC (bbro_basic_block_data, bbd, new_size);
       for (i = array_size; i < new_size; i++)
@@ -2234,7 +2235,7 @@ reorder_basic_blocks (void)
     uncond_jump_length = get_uncond_jump_length ();
 
   /* We need to know some information for each basic block.  */
-  array_size = GET_ARRAY_SIZE (last_basic_block);
+  array_size = GET_ARRAY_SIZE (last_basic_block_for_fn (cfun));
   bbd = XNEWVEC (bbro_basic_block_data, array_size);
   for (i = 0; i < array_size; i++)
     {
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index bbd0dd8..83b3eba 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -457,8 +457,8 @@ compute_defs_uses_and_gen (fibheap_t all_btr_defs, btr_def *def_array,
   btr_def_group all_btr_def_groups = NULL;
   defs_uses_info info;
 
-  bitmap_vector_clear (bb_gen, last_basic_block);
-  for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+  bitmap_vector_clear (bb_gen, last_basic_block_for_fn (cfun));
+  for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
     {
       basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       int reg;
@@ -618,8 +618,8 @@ compute_kill (sbitmap *bb_kill, sbitmap *btr_defset,
 
   /* For each basic block, form the set BB_KILL - the set
      of definitions that the block kills.  */
-  bitmap_vector_clear (bb_kill, last_basic_block);
-  for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+  bitmap_vector_clear (bb_kill, last_basic_block_for_fn (cfun));
+  for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
     {
       for (regno = first_btr; regno <= last_btr; regno++)
 	if (TEST_HARD_REG_BIT (all_btrs, regno)
@@ -642,14 +642,14 @@ compute_out (sbitmap *bb_out, sbitmap *bb_gen, sbitmap *bb_kill, int max_uid)
   int changed;
   sbitmap bb_in = sbitmap_alloc (max_uid);
 
-  for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+  for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
     bitmap_copy (bb_out[i], bb_gen[i]);
 
   changed = 1;
   while (changed)
     {
       changed = 0;
-      for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+      for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
 	{
 	  bitmap_union_of_preds (bb_in, bb_out, BASIC_BLOCK_FOR_FN (cfun, i));
 	  changed |= bitmap_ior_and_compl (bb_out[i], bb_gen[i],
@@ -668,7 +668,7 @@ link_btr_uses (btr_def *def_array, btr_user *use_array, sbitmap *bb_out,
 
   /* Link uses to the uses lists of all of their reaching defs.
      Count up the number of reaching defs of each use.  */
-  for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+  for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
     {
       basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       rtx insn;
@@ -780,8 +780,10 @@ build_btr_def_use_webs (fibheap_t all_btr_defs)
   btr_user *use_array   = XCNEWVEC (btr_user, max_uid);
   sbitmap *btr_defset   = sbitmap_vector_alloc (
 			   (last_btr - first_btr) + 1, max_uid);
-  sbitmap *bb_gen      = sbitmap_vector_alloc (last_basic_block, max_uid);
-  HARD_REG_SET *btrs_written = XCNEWVEC (HARD_REG_SET, last_basic_block);
+  sbitmap *bb_gen = sbitmap_vector_alloc (last_basic_block_for_fn (cfun),
+					  max_uid);
+  HARD_REG_SET *btrs_written = XCNEWVEC (HARD_REG_SET,
+					 last_basic_block_for_fn (cfun));
   sbitmap *bb_kill;
   sbitmap *bb_out;
 
@@ -790,11 +792,11 @@ build_btr_def_use_webs (fibheap_t all_btr_defs)
   compute_defs_uses_and_gen (all_btr_defs, def_array, use_array, btr_defset,
 			     bb_gen, btrs_written);
 
-  bb_kill = sbitmap_vector_alloc (last_basic_block, max_uid);
+  bb_kill = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), max_uid);
   compute_kill (bb_kill, btr_defset, btrs_written);
   free (btrs_written);
 
-  bb_out = sbitmap_vector_alloc (last_basic_block, max_uid);
+  bb_out = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), max_uid);
   compute_out (bb_out, bb_gen, bb_kill, max_uid);
 
   sbitmap_vector_free (bb_gen);
@@ -1405,7 +1407,7 @@ migrate_btr_defs (enum reg_class btr_class, int allow_callee_save)
     {
       int i;
 
-      for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
+      for (i = NUM_FIXED_BLOCKS; i < last_basic_block_for_fn (cfun); i++)
 	{
 	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  fprintf (dump_file,
@@ -1428,8 +1430,8 @@ migrate_btr_defs (enum reg_class btr_class, int allow_callee_save)
 	  first_btr = reg;
       }
 
-  btrs_live = XCNEWVEC (HARD_REG_SET, last_basic_block);
-  btrs_live_at_end = XCNEWVEC (HARD_REG_SET, last_basic_block);
+  btrs_live = XCNEWVEC (HARD_REG_SET, last_basic_block_for_fn (cfun));
+  btrs_live_at_end = XCNEWVEC (HARD_REG_SET, last_basic_block_for_fn (cfun));
 
   build_btr_def_use_webs (all_btr_defs);
 
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 3337372..6c3181d 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -171,10 +171,10 @@ compact_blocks (void)
 	}
       gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
-      for (; i < last_basic_block; i++)
+      for (; i < last_basic_block_for_fn (cfun); i++)
 	SET_BASIC_BLOCK_FOR_FN (cfun, i, NULL);
     }
-  last_basic_block = n_basic_blocks_for_fn (cfun);
+  last_basic_block_for_fn (cfun) = n_basic_blocks_for_fn (cfun);
 }
 
 /* Remove block B from the basic block array.  */
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index ad5928a..9900d82 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -72,15 +72,15 @@ mark_dfs_back_edges (void)
   bool found = false;
 
   /* Allocate the preorder and postorder number arrays.  */
-  pre = XCNEWVEC (int, last_basic_block);
-  post = XCNEWVEC (int, last_basic_block);
+  pre = XCNEWVEC (int, last_basic_block_for_fn (cfun));
+  post = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
   /* Allocate stack for back-tracking up CFG.  */
   stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* None of the nodes in the CFG have been visited yet.  */
   bitmap_clear (visited);
@@ -428,8 +428,8 @@ control_dependences::control_dependences (struct edge_list *edges)
   : m_el (edges)
 {
   timevar_push (TV_CONTROL_DEPENDENCES);
-  control_dependence_map.create (last_basic_block);
-  for (int i = 0; i < last_basic_block; ++i)
+  control_dependence_map.create (last_basic_block_for_fn (cfun));
+  for (int i = 0; i < last_basic_block_for_fn (cfun); ++i)
     control_dependence_map.quick_push (BITMAP_ALLOC (NULL));
   for (int i = 0; i < NUM_EDGES (m_el); ++i)
     find_control_dependence (i);
@@ -622,7 +622,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* None of the nodes in the CFG have been visited yet.  */
   bitmap_clear (visited);
@@ -778,7 +778,7 @@ inverted_post_order_compute (int *post_order)
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* None of the nodes in the CFG have been visited yet.  */
   bitmap_clear (visited);
@@ -931,7 +931,7 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
     rev_post_order_num -= NUM_FIXED_BLOCKS;
 
   /* Allocate bitmap to track nodes that have been visited.  */
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* None of the nodes in the CFG have been visited yet.  */
   bitmap_clear (visited);
@@ -1062,7 +1062,7 @@ flow_dfs_compute_reverse_init (depth_first_search_ds data)
   data->sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
-  data->visited_blocks = sbitmap_alloc (last_basic_block);
+  data->visited_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* None of the nodes in the CFG have been visited yet.  */
   bitmap_clear (data->visited_blocks);
@@ -1147,7 +1147,7 @@ dfs_enumerate_from (basic_block bb, int reverse,
 #define VISITED_P(BB) (bitmap_bit_p (visited, (BB)->index))
 
   /* Resize the VISITED sbitmap if necessary.  */
-  size = last_basic_block;
+  size = last_basic_block_for_fn (cfun);
   if (size < 10)
     size = 10;
 
@@ -1313,7 +1313,8 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs)
 	 form, the basic blocks where new and/or old names are defined
 	 may have disappeared by CFG cleanup calls.  In this case,
 	 we may pull a non-existing block from the work stack.  */
-      gcc_checking_assert (bb_index < (unsigned) last_basic_block);
+      gcc_checking_assert (bb_index
+			   < (unsigned) last_basic_block_for_fn (cfun));
 
       EXECUTE_IF_AND_COMPL_IN_BITMAP (&dfs[bb_index], phi_insertion_points,
 	                              0, i, bi)
@@ -1508,7 +1509,7 @@ single_pred_before_succ_order (void)
   basic_block *order = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   unsigned n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
   unsigned np, i;
-  sbitmap visited = sbitmap_alloc (last_basic_block);
+  sbitmap visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
 #define MARK_VISITED(BB) (bitmap_set_bit (visited, (BB)->index))
 #define VISITED_P(BB) (bitmap_bit_p (visited, (BB)->index))
diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index a0c2c66..f73bbc5 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -209,7 +209,7 @@ make_edges (basic_block min, basic_block max, int update_p)
      nearly fully-connected CFGs.  In that case we spend a significant
      amount of time searching the edge lists for duplicates.  */
   if (forced_labels || cfun->cfg->max_jumptable_ents > 100)
-    edge_cache = sbitmap_alloc (last_basic_block);
+    edge_cache = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   /* By nature of the way these get numbered, ENTRY_BLOCK_PTR->next_bb block
      is always the entry.  */
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index d98ac5b..014f78b 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -501,7 +501,7 @@ add_scope_conflicts (void)
   FOR_ALL_BB (bb)
     bb->aux = BITMAP_ALLOC (&stack_var_bitmap_obstack);
 
-  rpo = XNEWVEC (int, last_basic_block);
+  rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
   n_bbs = pre_and_rev_post_order_compute (NULL, rpo, false);
 
   changed = true;
@@ -5809,7 +5809,7 @@ gimple_expand_cfg (void)
 	}
     }
 
-  blocks = sbitmap_alloc (last_basic_block);
+  blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_ones (blocks);
   find_many_sub_basic_blocks (blocks);
   sbitmap_free (blocks);
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index ab1c15f..2400965 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -98,8 +98,8 @@ verify_flow_info (void)
   basic_block *last_visited;
 
   timevar_push (TV_CFG_VERIFY);
-  last_visited = XCNEWVEC (basic_block, last_basic_block);
-  edge_checksum = XCNEWVEC (size_t, last_basic_block);
+  last_visited = XCNEWVEC (basic_block, last_basic_block_for_fn (cfun));
+  edge_checksum = XCNEWVEC (size_t, last_basic_block_for_fn (cfun));
 
   /* Check bb chain & numbers.  */
   last_bb_seen = ENTRY_BLOCK_PTR_FOR_FN (cfun);
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 6245605..9d28950 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -1364,7 +1364,7 @@ verify_loop_structure (void)
       }
 
   /* Check the recorded loop father and sizes of loops.  */
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (visited);
   bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   FOR_EACH_LOOP (loop, LI_FROM_INNERMOST)
@@ -1478,7 +1478,7 @@ verify_loop_structure (void)
   if (loops_state_satisfies_p (LOOPS_HAVE_MARKED_IRREDUCIBLE_REGIONS))
     {
       /* Record old info.  */
-      irreds = sbitmap_alloc (last_basic_block);
+      irreds = sbitmap_alloc (last_basic_block_for_fn (cfun));
       FOR_EACH_BB (bb)
 	{
 	  edge_iterator ei;
diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c
index 2260f4b..84b61c1 100644
--- a/gcc/cfgloopanal.c
+++ b/gcc/cfgloopanal.c
@@ -64,7 +64,7 @@ just_once_each_iteration_p (const struct loop *loop, const_basic_block bb)
 
    LOOPS is the loop tree.  */
 
-#define LOOP_REPR(LOOP) ((LOOP)->num + last_basic_block)
+#define LOOP_REPR(LOOP) ((LOOP)->num + last_basic_block_for_fn (cfun))
 #define BB_REPR(BB) ((BB)->index + 1)
 
 bool
@@ -94,7 +94,7 @@ mark_irreducible_loops (void)
     }
 
   /* Create the edge lists.  */
-  g = new_graph (last_basic_block + num);
+  g = new_graph (last_basic_block_for_fn (cfun) + num);
 
   FOR_BB_BETWEEN (act, ENTRY_BLOCK_PTR_FOR_FN (cfun),
 		  EXIT_BLOCK_PTR_FOR_FN (cfun), next_bb)
diff --git a/gcc/cfgloopmanip.c b/gcc/cfgloopmanip.c
index 7a6b201..2bb8b6a 100644
--- a/gcc/cfgloopmanip.c
+++ b/gcc/cfgloopmanip.c
@@ -204,7 +204,7 @@ fix_bb_placements (basic_block from,
       || from == base_loop->header)
     return;
 
-  in_queue = sbitmap_alloc (last_basic_block);
+  in_queue = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (in_queue);
   bitmap_set_bit (in_queue, from->index);
   /* Prevent us from going out of the base_loop.  */
@@ -348,7 +348,7 @@ remove_path (edge e)
 
   n_bord_bbs = 0;
   bord_bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
-  seen = sbitmap_alloc (last_basic_block);
+  seen = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (seen);
 
   /* Find "border" hexes -- i.e. those with predecessor in removed path.  */
@@ -623,7 +623,7 @@ update_dominators_in_loop (struct loop *loop)
   basic_block *body;
   unsigned i;
 
-  seen = sbitmap_alloc (last_basic_block);
+  seen = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (seen);
   body = get_loop_body (loop);
 
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 34fe4f3..5dc52a6 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -328,7 +328,7 @@ create_basic_block_structure (rtx head, rtx end, rtx bb_note, basic_block after)
 
   BB_HEAD (bb) = head;
   BB_END (bb) = end;
-  bb->index = last_basic_block++;
+  bb->index = last_basic_block_for_fn (cfun)++;
   bb->flags = BB_NEW | BB_RTL;
   link_block (bb, after);
   SET_BASIC_BLOCK_FOR_FN (cfun, bb->index, bb);
@@ -355,9 +355,12 @@ rtl_create_basic_block (void *headp, void *endp, basic_block after)
   basic_block bb;
 
   /* Grow the basic block array if needed.  */
-  if ((size_t) last_basic_block >= basic_block_info_for_fn (cfun)->length ())
+  if ((size_t) last_basic_block_for_fn (cfun)
+      >= basic_block_info_for_fn (cfun)->length ())
     {
-      size_t new_size = last_basic_block + (last_basic_block + 3) / 4;
+      size_t new_size =
+	(last_basic_block_for_fn (cfun)
+	 + (last_basic_block_for_fn (cfun) + 3) / 4);
       vec_safe_grow_cleared (basic_block_info_for_fn (cfun), new_size);
     }
 
@@ -4252,7 +4255,7 @@ break_superblocks (void)
   bool need = false;
   basic_block bb;
 
-  superblocks = sbitmap_alloc (last_basic_block);
+  superblocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (superblocks);
 
   FOR_EACH_BB (bb)
@@ -4778,7 +4781,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
 {
   int i;
   int blocks_split = 0;
-  int last_bb = last_basic_block;
+  int last_bb = last_basic_block_for_fn (cfun);
   bool check_last_block = false;
 
   if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
diff --git a/gcc/config/epiphany/resolve-sw-modes.c b/gcc/config/epiphany/resolve-sw-modes.c
index b43b4d9..a780254 100644
--- a/gcc/config/epiphany/resolve-sw-modes.c
+++ b/gcc/config/epiphany/resolve-sw-modes.c
@@ -61,8 +61,8 @@ resolve_sw_modes (void)
   bool need_commit = false;
   bool finalize_fp_sets = (MACHINE_FUNCTION (cfun)->unknown_mode_sets == 0);
 
-  todo.create (last_basic_block);
-  pushed = sbitmap_alloc (last_basic_block);
+  todo.create (last_basic_block_for_fn (cfun));
+  pushed = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (pushed);
   if (!finalize_fp_sets)
     {
diff --git a/gcc/config/frv/frv.c b/gcc/config/frv/frv.c
index a5eb2c1..a5aeb75 100644
--- a/gcc/config/frv/frv.c
+++ b/gcc/config/frv/frv.c
@@ -8067,8 +8067,8 @@ frv_optimize_membar (void)
   rtx *last_membar;
 
   compute_bb_for_insn ();
-  first_io = XCNEWVEC (struct frv_io, last_basic_block);
-  last_membar = XCNEWVEC (rtx, last_basic_block);
+  first_io = XCNEWVEC (struct frv_io, last_basic_block_for_fn (cfun));
+  last_membar = XCNEWVEC (rtx, last_basic_block_for_fn (cfun));
 
   FOR_EACH_BB (bb)
     frv_optimize_membar_local (bb, &first_io[bb->index],
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index 7903443..f19478c 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -15071,11 +15071,11 @@ r10k_insert_cache_barriers (void)
 
   /* Bit X of PROTECTED_BBS is set if the last operation in basic block
      X is protected by a cache barrier.  */
-  protected_bbs = sbitmap_alloc (last_basic_block);
+  protected_bbs = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (protected_bbs);
 
   /* Iterate over the basic blocks in reverse post-order.  */
-  rev_post_order = XNEWVEC (int, last_basic_block);
+  rev_post_order = XNEWVEC (int, last_basic_block_for_fn (cfun));
   n = pre_and_rev_post_order_compute (NULL, rev_post_order, false);
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index a658ee6..1a9895e 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2469,7 +2469,7 @@ spu_machine_dependent_reorg (void)
       return;
     }
 
-  blocks = sbitmap_alloc (last_basic_block);
+  blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (blocks);
 
   in_spu_reorg = 1;
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 9b8bd1e..600c617 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -595,8 +595,8 @@ compute_local_properties (sbitmap *kill, sbitmap *comp,
   unsigned int i;
 
   /* Initialize the bitmaps that were passed in.  */
-  bitmap_vector_clear (kill, last_basic_block);
-  bitmap_vector_clear (comp, last_basic_block);
+  bitmap_vector_clear (kill, last_basic_block_for_fn (cfun));
+  bitmap_vector_clear (comp, last_basic_block_for_fn (cfun));
 
   for (i = 0; i < table->size; i++)
     {
@@ -1355,7 +1355,7 @@ find_implicit_sets (void)
   rtx cond, new_rtx;
   unsigned int count = 0;
   bool edges_split = false;
-  size_t implicit_sets_size = last_basic_block + 10;
+  size_t implicit_sets_size = last_basic_block_for_fn (cfun) + 10;
 
   implicit_sets = XCNEWVEC (rtx, implicit_sets_size);
 
@@ -1667,7 +1667,7 @@ bypass_conditional_jumps (void)
   if (ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun))
     return 0;
 
-  bypass_last_basic_block = last_basic_block;
+  bypass_last_basic_block = last_basic_block_for_fn (cfun);
   mark_dfs_back_edges ();
 
   changed = 0;
@@ -1809,8 +1809,8 @@ one_cprop_pass (void)
     df_analyze ();
 
   /* Initialize implicit_set_indexes array.  */
-  implicit_set_indexes = XNEWVEC (int, last_basic_block);
-  for (i = 0; i < last_basic_block; i++)
+  implicit_set_indexes = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  for (i = 0; i < last_basic_block_for_fn (cfun); i++)
     implicit_set_indexes[i] = -1;
 
   alloc_hash_table (&set_hash_table);
@@ -1827,7 +1827,8 @@ one_cprop_pass (void)
       basic_block bb;
       rtx insn;
 
-      alloc_cprop_mem (last_basic_block, set_hash_table.n_elems);
+      alloc_cprop_mem (last_basic_block_for_fn (cfun),
+		       set_hash_table.n_elems);
       compute_cprop_data ();
 
       free (implicit_set_indexes);
diff --git a/gcc/cse.c b/gcc/cse.c
index 215beb0..74ae8ba 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -6522,7 +6522,7 @@ cse_main (rtx f ATTRIBUTE_UNUSED, int nregs)
 {
   struct cse_basic_block_data ebb_data;
   basic_block bb;
-  int *rc_order = XNEWVEC (int, last_basic_block);
+  int *rc_order = XNEWVEC (int, last_basic_block_for_fn (cfun));
   int i, n_blocks;
 
   df_set_flags (DF_LR_RUN_DCE);
@@ -6551,7 +6551,7 @@ cse_main (rtx f ATTRIBUTE_UNUSED, int nregs)
   reg_eqv_table = XNEWVEC (struct reg_eqv_elem, nregs);
 
   /* Set up the table of already visited basic blocks.  */
-  cse_visited_basic_blocks = sbitmap_alloc (last_basic_block);
+  cse_visited_basic_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (cse_visited_basic_blocks);
 
   /* Loop over basic blocks in reverse completion order (RPO),
diff --git a/gcc/df-core.c b/gcc/df-core.c
index 87419c2..d41fb72 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -721,8 +721,8 @@ rest_of_handle_df_initialize (void)
   if (optimize > 1)
     df_live_add_problem ();
 
-  df->postorder = XNEWVEC (int, last_basic_block);
-  df->postorder_inverted = XNEWVEC (int, last_basic_block);
+  df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  df->postorder_inverted = XNEWVEC (int, last_basic_block_for_fn (cfun));
   df->n_blocks = post_order_compute (df->postorder, true, true);
   df->n_blocks_inverted = inverted_post_order_compute (df->postorder_inverted);
   gcc_assert (df->n_blocks == df->n_blocks_inverted);
@@ -1115,7 +1115,7 @@ df_worklist_dataflow (struct dataflow *dataflow,
                       int n_blocks)
 {
   bitmap pending = BITMAP_ALLOC (&df_bitmap_obstack);
-  sbitmap considered = sbitmap_alloc (last_basic_block);
+  sbitmap considered = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_iterator bi;
   unsigned int *bbindex_to_postorder;
   int i;
@@ -1125,11 +1125,12 @@ df_worklist_dataflow (struct dataflow *dataflow,
   gcc_assert (dir != DF_NONE);
 
   /* BBINDEX_TO_POSTORDER maps the bb->index to the reverse postorder.  */
-  bbindex_to_postorder = XNEWVEC (unsigned int, last_basic_block);
+  bbindex_to_postorder = XNEWVEC (unsigned int,
+				  last_basic_block_for_fn (cfun));
 
   /* Initialize the array to an out-of-bound value.  */
-  for (i = 0; i < last_basic_block; i++)
-    bbindex_to_postorder[i] = last_basic_block;
+  for (i = 0; i < last_basic_block_for_fn (cfun); i++)
+    bbindex_to_postorder[i] = last_basic_block_for_fn (cfun);
 
   /* Initialize the considered map.  */
   bitmap_clear (considered);
@@ -1236,8 +1237,8 @@ df_analyze (void)
 
   free (df->postorder);
   free (df->postorder_inverted);
-  df->postorder = XNEWVEC (int, last_basic_block);
-  df->postorder_inverted = XNEWVEC (int, last_basic_block);
+  df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  df->postorder_inverted = XNEWVEC (int, last_basic_block_for_fn (cfun));
   df->n_blocks = post_order_compute (df->postorder, true, true);
   df->n_blocks_inverted = inverted_post_order_compute (df->postorder_inverted);
 
@@ -1481,7 +1482,7 @@ df_set_bb_dirty (basic_block bb)
 void
 df_grow_bb_info (struct dataflow *dflow)
 {
-  unsigned int new_size = last_basic_block + 1;
+  unsigned int new_size = last_basic_block_for_fn (cfun) + 1;
   if (dflow->block_info_size < new_size)
     {
       new_size += new_size / 4;
@@ -1553,7 +1554,8 @@ df_compact_blocks (void)
       /* Now shuffle the block info for the problem.  */
       if (dflow->problem->free_bb_fun)
 	{
-	  int size = last_basic_block * dflow->problem->block_info_elt_size;
+	  int size = (last_basic_block_for_fn (cfun)
+		      * dflow->problem->block_info_elt_size);
 	  problem_temps = XNEWVAR (char, size);
 	  df_grow_bb_info (dflow);
 	  memcpy (problem_temps, dflow->block_info, size);
@@ -1571,7 +1573,7 @@ df_compact_blocks (void)
 	    }
 	  memset ((char *)dflow->block_info
 		  + i * dflow->problem->block_info_elt_size, 0,
-		  (last_basic_block - i)
+		  (last_basic_block_for_fn (cfun) - i)
 		  * dflow->problem->block_info_elt_size);
 	  free (problem_temps);
 	}
@@ -1608,7 +1610,7 @@ df_compact_blocks (void)
 
   gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
-  for (; i < last_basic_block; i++)
+  for (; i < last_basic_block_for_fn (cfun); i++)
     SET_BASIC_BLOCK_FOR_FN (cfun, i, NULL);
 
 #ifdef DF_DEBUG_CFG
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index 2b42b48..ab19372 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -1173,8 +1173,8 @@ df_lr_verify_solution_start (void)
   df_lr->solutions_dirty = true;
 
   problem_data = (struct df_lr_problem_data *)df_lr->problem_data;
-  problem_data->in = XNEWVEC (bitmap_head, last_basic_block);
-  problem_data->out = XNEWVEC (bitmap_head, last_basic_block);
+  problem_data->in = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
+  problem_data->out = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
 
   FOR_ALL_BB (bb)
     {
@@ -1710,8 +1710,8 @@ df_live_verify_solution_start (void)
   df_live->solutions_dirty = true;
 
   problem_data = (struct df_live_problem_data *)df_live->problem_data;
-  problem_data->in = XNEWVEC (bitmap_head, last_basic_block);
-  problem_data->out = XNEWVEC (bitmap_head, last_basic_block);
+  problem_data->in = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
+  problem_data->out = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
 
   FOR_ALL_BB (bb)
     {
@@ -4315,7 +4315,7 @@ df_md_local_compute (bitmap all_blocks)
 
   bitmap_clear (&seen_in_insn);
 
-  frontiers = XNEWVEC (bitmap_head, last_basic_block);
+  frontiers = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   FOR_ALL_BB (bb)
     bitmap_initialize (&frontiers[bb->index], &bitmap_default_obstack);
 
diff --git a/gcc/dominance.c b/gcc/dominance.c
index e9d2265..af73078 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -159,7 +159,8 @@ init_dom_info (struct dom_info *di, enum cdi_direction dir)
   init_ar (di->set_size, unsigned int, num, 1);
   init_ar (di->set_child, TBB, num, 0);
 
-  init_ar (di->dfs_order, TBB, (unsigned int) last_basic_block + 1, 0);
+  init_ar (di->dfs_order, TBB,
+	   (unsigned int) last_basic_block_for_fn (cfun) + 1, 0);
   init_ar (di->dfs_to_bb, basic_block, num, 0);
 
   di->dfsnum = 1;
@@ -296,7 +297,7 @@ calc_dfs_tree_nonrec (struct dom_info *di, basic_block bb, bool reverse)
 	  if (bb != en_block)
 	    my_i = di->dfs_order[bb->index];
 	  else
-	    my_i = di->dfs_order[last_basic_block];
+	    my_i = di->dfs_order[last_basic_block_for_fn (cfun)];
 	  child_i = di->dfs_order[bn->index] = di->dfsnum++;
 	  di->dfs_to_bb[child_i] = bn;
 	  di->dfs_parent[child_i] = my_i;
@@ -335,7 +336,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
   /* The first block is the ENTRY_BLOCK (or EXIT_BLOCK if REVERSE).  */
   basic_block begin = (reverse
 		       ? EXIT_BLOCK_PTR_FOR_FN (cfun) : ENTRY_BLOCK_PTR_FOR_FN (cfun));
-  di->dfs_order[last_basic_block] = di->dfsnum;
+  di->dfs_order[last_basic_block_for_fn (cfun)] = di->dfsnum;
   di->dfs_to_bb[di->dfsnum] = begin;
   di->dfsnum++;
 
@@ -367,7 +368,8 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
 	  bitmap_set_bit (di->fake_exit_edge, b->index);
 	  di->dfs_order[b->index] = di->dfsnum;
 	  di->dfs_to_bb[di->dfsnum] = b;
-	  di->dfs_parent[di->dfsnum] = di->dfs_order[last_basic_block];
+	  di->dfs_parent[di->dfsnum] =
+	    di->dfs_order[last_basic_block_for_fn (cfun)];
 	  di->dfsnum++;
 	  calc_dfs_tree_nonrec (di, b, reverse);
 	}
@@ -384,7 +386,8 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
 	      bitmap_set_bit (di->fake_exit_edge, b2->index);
 	      di->dfs_order[b2->index] = di->dfsnum;
 	      di->dfs_to_bb[di->dfsnum] = b2;
-	      di->dfs_parent[di->dfsnum] = di->dfs_order[last_basic_block];
+	      di->dfs_parent[di->dfsnum] =
+		di->dfs_order[last_basic_block_for_fn (cfun)];
 	      di->dfsnum++;
 	      calc_dfs_tree_nonrec (di, b2, reverse);
 	      gcc_checking_assert (di->dfs_order[b->index]);
@@ -546,7 +549,7 @@ calc_idoms (struct dom_info *di, bool reverse)
 	  if (b == en_block)
 	    {
 	    do_fake_exit_edge:
-	      k1 = di->dfs_order[last_basic_block];
+	      k1 = di->dfs_order[last_basic_block_for_fn (cfun)];
 	    }
 	  else
 	    k1 = di->dfs_order[b->index];
diff --git a/gcc/domwalk.c b/gcc/domwalk.c
index 3350e4b..e84c8f7 100644
--- a/gcc/domwalk.c
+++ b/gcc/domwalk.c
@@ -159,7 +159,7 @@ dom_walker::walk (basic_block bb)
     {
       postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
       postorder_num = inverted_post_order_compute (postorder);
-      bb_postorder = XNEWVEC (int, last_basic_block);
+      bb_postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
       for (int i = 0; i < postorder_num; ++i)
 	bb_postorder[postorder[i]] = i;
       free (postorder);
diff --git a/gcc/dse.c b/gcc/dse.c
index 2d8ce1e..a926cb8 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -772,7 +772,7 @@ dse_step0 (void)
 
   rtx_group_table.create (11);
 
-  bb_table = XNEWVEC (bb_info_t, last_basic_block);
+  bb_table = XNEWVEC (bb_info_t, last_basic_block_for_fn (cfun));
   rtx_group_next_id = 0;
 
   stores_off_frame_dead_at_return = !cfun->stdarg;
@@ -3283,7 +3283,7 @@ static void
 dse_step3 (bool for_spills)
 {
   basic_block bb;
-  sbitmap unreachable_blocks = sbitmap_alloc (last_basic_block);
+  sbitmap unreachable_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   sbitmap_iterator sbi;
   bitmap all_ones = NULL;
   unsigned int i;
diff --git a/gcc/function.c b/gcc/function.c
index 2c8d781..d257af4 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -6498,7 +6498,7 @@ epilogue_done:
       commit_edge_insertions ();
 
       /* Look for basic blocks within the prologue insns.  */
-      blocks = sbitmap_alloc (last_basic_block);
+      blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (blocks);
       bitmap_set_bit (blocks, entry_edge->dest->index);
       bitmap_set_bit (blocks, orig_entry_edge->dest->index);
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 8928c85..fa25a46 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -633,8 +633,9 @@ alloc_gcse_mem (void)
      pre-processor limitation with template types in macro arguments.  */
   typedef vec<rtx> vec_rtx_heap;
   typedef vec<modify_pair> vec_modify_pair_heap;
-  modify_mem_list = GCNEWVEC (vec_rtx_heap, last_basic_block);
-  canon_modify_mem_list = GCNEWVEC (vec_modify_pair_heap, last_basic_block);
+  modify_mem_list = GCNEWVEC (vec_rtx_heap, last_basic_block_for_fn (cfun));
+  canon_modify_mem_list = GCNEWVEC (vec_modify_pair_heap,
+				    last_basic_block_for_fn (cfun));
   modify_mem_list_set = BITMAP_ALLOC (NULL);
   blocks_with_calls = BITMAP_ALLOC (NULL);
 }
@@ -685,13 +686,13 @@ compute_local_properties (sbitmap *transp, sbitmap *comp, sbitmap *antloc,
   /* Initialize any bitmaps that were passed in.  */
   if (transp)
     {
-      bitmap_vector_ones (transp, last_basic_block);
+      bitmap_vector_ones (transp, last_basic_block_for_fn (cfun));
     }
 
   if (comp)
-    bitmap_vector_clear (comp, last_basic_block);
+    bitmap_vector_clear (comp, last_basic_block_for_fn (cfun));
   if (antloc)
-    bitmap_vector_clear (antloc, last_basic_block);
+    bitmap_vector_clear (antloc, last_basic_block_for_fn (cfun));
 
   for (i = 0; i < table->size; i++)
     {
@@ -1972,7 +1973,7 @@ prune_insertions_deletions (int n_elems)
 
   /* Similarly for deletions, but those occur in blocks rather than on
      edges.  */
-  for (i = 0; i < (unsigned) last_basic_block; i++)
+  for (i = 0; i < (unsigned) last_basic_block_for_fn (cfun); i++)
     {
       EXECUTE_IF_SET_IN_BITMAP (pre_delete_map[i], 0, j, sbi)
 	deletions[j]++;
@@ -1993,7 +1994,7 @@ prune_insertions_deletions (int n_elems)
       for (i = 0; i < (unsigned) n_edges_for_fn (cfun); i++)
 	bitmap_clear_bit (pre_insert_map[i], j);
 
-      for (i = 0; i < (unsigned) last_basic_block; i++)
+      for (i = 0; i < (unsigned) last_basic_block_for_fn (cfun); i++)
 	bitmap_clear_bit (pre_delete_map[i], j);
     }
 
@@ -2012,7 +2013,7 @@ compute_pre_data (void)
 
   compute_local_properties (transp, comp, antloc, &expr_hash_table);
   prune_expressions (true);
-  bitmap_vector_clear (ae_kill, last_basic_block);
+  bitmap_vector_clear (ae_kill, last_basic_block_for_fn (cfun));
 
   /* Compute ae_kill for each basic block using:
 
@@ -2103,7 +2104,7 @@ static int
 pre_expr_reaches_here_p (basic_block occr_bb, struct expr *expr, basic_block bb)
 {
   int rval;
-  char *visited = XCNEWVEC (char, last_basic_block);
+  char *visited = XCNEWVEC (char, last_basic_block_for_fn (cfun));
 
   rval = pre_expr_reaches_here_p_work (occr_bb, expr, bb, visited);
 
@@ -2687,7 +2688,7 @@ one_pre_gcse_pass (void)
   if (expr_hash_table.n_elems > 0)
     {
       struct edge_list *edge_list;
-      alloc_pre_mem (last_basic_block, expr_hash_table.n_elems);
+      alloc_pre_mem (last_basic_block_for_fn (cfun), expr_hash_table.n_elems);
       edge_list = compute_pre_data ();
       changed |= pre_gcse (edge_list);
       free_edge_list (edge_list);
@@ -2816,8 +2817,8 @@ compute_code_hoist_vbeinout (void)
   int changed, passes;
   basic_block bb;
 
-  bitmap_vector_clear (hoist_vbeout, last_basic_block);
-  bitmap_vector_clear (hoist_vbein, last_basic_block);
+  bitmap_vector_clear (hoist_vbeout, last_basic_block_for_fn (cfun));
+  bitmap_vector_clear (hoist_vbein, last_basic_block_for_fn (cfun));
 
   passes = 0;
   changed = 1;
@@ -3033,7 +3034,7 @@ should_hoist_expr_to_dom (basic_block expr_bb, struct expr *expr,
   if (visited == NULL)
     {
       visited_allocated_locally = 1;
-      visited = sbitmap_alloc (last_basic_block);
+      visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (visited);
     }
 
@@ -3166,7 +3167,7 @@ hoist_code (void)
      data to restrict distance an expression can travel.  */
 
   to_bb_head = XCNEWVEC (int, get_max_uid ());
-  bb_size = XCNEWVEC (int, last_basic_block);
+  bb_size = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
   FOR_EACH_BB (bb)
     {
@@ -3622,7 +3623,8 @@ one_code_hoisting_pass (void)
 
   if (expr_hash_table.n_elems > 0)
     {
-      alloc_code_hoist_mem (last_basic_block, expr_hash_table.n_elems);
+      alloc_code_hoist_mem (last_basic_block_for_fn (cfun),
+			    expr_hash_table.n_elems);
       compute_code_hoist_data ();
       changed = hoist_code ();
       free_code_hoist_mem ();
diff --git a/gcc/graph.c b/gcc/graph.c
index 3f02cab..6c405d8 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -157,7 +157,7 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
   int i, n;
   sbitmap visited;
 
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (visited);
 
   n = pre_and_rev_post_order_compute_fn (fun, NULL, rpo, true);
diff --git a/gcc/graphite-sese-to-poly.c b/gcc/graphite-sese-to-poly.c
index 0eebbab..975db63 100644
--- a/gcc/graphite-sese-to-poly.c
+++ b/gcc/graphite-sese-to-poly.c
@@ -423,7 +423,7 @@ build_scop_bbs_1 (scop_p scop, sbitmap visited, basic_block bb)
 static void
 build_scop_bbs (scop_p scop)
 {
-  sbitmap visited = sbitmap_alloc (last_basic_block);
+  sbitmap visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   sese region = SCOP_REGION (scop);
 
   bitmap_clear (visited);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 8d47eb9..d5e3309 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -8075,7 +8075,7 @@ unlink_bb_notes (basic_block first, basic_block last)
   if (first == last)
     return;
 
-  bb_header = XNEWVEC (rtx, last_basic_block);
+  bb_header = XNEWVEC (rtx, last_basic_block_for_fn (cfun));
 
   /* Make a sentinel.  */
   if (last->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index f8fa0ee..d5dfb8d 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -1593,7 +1593,7 @@ execute_split_functions (void)
   calculate_dominance_info (CDI_DOMINATORS);
 
   /* Compute local info about basic blocks and determine function size/time.  */
-  bb_info_vec.safe_grow_cleared (last_basic_block + 1);
+  bb_info_vec.safe_grow_cleared (last_basic_block_for_fn (cfun) + 1);
   memset (&best_split_point, 0, sizeof (best_split_point));
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index 09e22d7..f9258ee 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -138,9 +138,10 @@ create_loop_tree_nodes (void)
 
   ira_bb_nodes
     = ((struct ira_loop_tree_node *)
-       ira_allocate (sizeof (struct ira_loop_tree_node) * last_basic_block));
-  last_basic_block_before_change = last_basic_block;
-  for (i = 0; i < (unsigned int) last_basic_block; i++)
+       ira_allocate (sizeof (struct ira_loop_tree_node)
+		     * last_basic_block_for_fn (cfun)));
+  last_basic_block_before_change = last_basic_block_for_fn (cfun);
+  for (i = 0; i < (unsigned int) last_basic_block_for_fn (cfun); i++)
     {
       ira_bb_nodes[i].regno_allocno_map = NULL;
       memset (ira_bb_nodes[i].reg_pressure, 0,
@@ -2605,8 +2606,10 @@ remove_unnecessary_regions (bool all_p)
     mark_all_loops_for_removal ();
   else
     mark_loops_for_removal ();
-  children_vec.create (last_basic_block + number_of_loops (cfun));
-  removed_loop_vec.create (last_basic_block + number_of_loops (cfun));
+  children_vec.create (last_basic_block_for_fn (cfun)
+		       + number_of_loops (cfun));
+  removed_loop_vec.create (last_basic_block_for_fn (cfun)
+			   + number_of_loops (cfun));
   remove_uneccesary_loop_nodes_from_loop_tree (ira_loop_tree_root);
   children_vec.release ();
   if (all_p)
diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
index 198fa47..d59461b 100644
--- a/gcc/ira-emit.c
+++ b/gcc/ira-emit.c
@@ -1239,15 +1239,17 @@ ira_emit (bool loops_p)
   edge e;
   ira_allocno_t a;
   ira_allocno_iterator ai;
+  size_t sz;
 
   FOR_EACH_ALLOCNO (a, ai)
     ALLOCNO_EMIT_DATA (a)->reg = regno_reg_rtx[ALLOCNO_REGNO (a)];
   if (! loops_p)
     return;
-  at_bb_start = (move_t *) ira_allocate (sizeof (move_t) * last_basic_block);
-  memset (at_bb_start, 0, sizeof (move_t) * last_basic_block);
-  at_bb_end = (move_t *) ira_allocate (sizeof (move_t) * last_basic_block);
-  memset (at_bb_end, 0, sizeof (move_t) * last_basic_block);
+  sz = sizeof (move_t) * last_basic_block_for_fn (cfun);
+  at_bb_start = (move_t *) ira_allocate (sz);
+  memset (at_bb_start, 0, sz);
+  at_bb_end = (move_t *) ira_allocate (sz);
+  memset (at_bb_end, 0, sz);
   local_allocno_bitmap = ira_allocate_bitmap ();
   used_regno_bitmap = ira_allocate_bitmap ();
   renamed_regno_bitmap = ira_allocate_bitmap ();
diff --git a/gcc/ira.c b/gcc/ira.c
index b3477ae..ae35035 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4507,12 +4507,15 @@ find_moveable_pseudos (void)
   int *uid_luid = XNEWVEC (int, max_uid);
   rtx *closest_uses = XNEWVEC (rtx, max_regs);
   /* A set of registers which are live but not modified throughout a block.  */
-  bitmap_head *bb_transp_live = XNEWVEC (bitmap_head, last_basic_block);
+  bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
+					 last_basic_block_for_fn (cfun));
   /* A set of registers which only exist in a given basic block.  */
-  bitmap_head *bb_local = XNEWVEC (bitmap_head, last_basic_block);
+  bitmap_head *bb_local = XNEWVEC (bitmap_head,
+				   last_basic_block_for_fn (cfun));
   /* A set of registers which are set once, in an instruction that can be
      moved freely downwards, but are otherwise transparent to a block.  */
-  bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head, last_basic_block);
+  bitmap_head *bb_moveable_reg_sets = XNEWVEC (bitmap_head,
+					       last_basic_block_for_fn (cfun));
   bitmap_head live, used, set, interesting, unusable_as_input;
   bitmap_iterator bi;
   bitmap_initialize (&interesting, 0);
@@ -5187,7 +5190,8 @@ ira (FILE *f)
      pseudos and 10K blocks or 100K pseudos and 1K blocks), we will
      use simplified and faster algorithms in LRA.  */
   lra_simple_p
-    = (ira_use_lra_p && max_reg_num () >= (1 << 26) / last_basic_block);
+    = (ira_use_lra_p
+       && max_reg_num () >= (1 << 26) / last_basic_block_for_fn (cfun));
   if (lra_simple_p)
     {
       /* It permits to skip live range splitting in LRA.  */
diff --git a/gcc/lcm.c b/gcc/lcm.c
index aa63c72..1129d6c 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -105,7 +105,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
 
   /* We want a maximal solution, so make an optimistic initialization of
      ANTIN.  */
-  bitmap_vector_ones (antin, last_basic_block);
+  bitmap_vector_ones (antin, last_basic_block_for_fn (cfun));
 
   /* Put every block on the worklist; this is necessary because of the
      optimistic initialization of ANTIN above.  */
@@ -330,10 +330,10 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
   /* Computation of insertion and deletion points requires computing LATERIN
      for the EXIT block.  We allocated an extra entry in the LATERIN array
      for just this purpose.  */
-  bitmap_ones (laterin[last_basic_block]);
+  bitmap_ones (laterin[last_basic_block_for_fn (cfun)]);
   FOR_EACH_EDGE (e, ei, EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
-    bitmap_and (laterin[last_basic_block],
-		     laterin[last_basic_block],
+    bitmap_and (laterin[last_basic_block_for_fn (cfun)],
+		     laterin[last_basic_block_for_fn (cfun)],
 		     later[(size_t) e->aux]);
 
   clear_aux_for_edges ();
@@ -359,7 +359,8 @@ compute_insert_delete (struct edge_list *edge_list, sbitmap *antloc,
       basic_block b = INDEX_EDGE_SUCC_BB (edge_list, x);
 
       if (b == EXIT_BLOCK_PTR_FOR_FN (cfun))
-	bitmap_and_compl (insert[x], later[x], laterin[last_basic_block]);
+	bitmap_and_compl (insert[x], later[x],
+			  laterin[last_basic_block_for_fn (cfun)]);
       else
 	bitmap_and_compl (insert[x], later[x], laterin[b->index]);
     }
@@ -389,29 +390,35 @@ pre_edge_lcm (int n_exprs, sbitmap *transp,
       fprintf (dump_file, "Edge List:\n");
       verify_edge_list (dump_file, edge_list);
       print_edge_list (dump_file, edge_list);
-      dump_bitmap_vector (dump_file, "transp", "", transp, last_basic_block);
-      dump_bitmap_vector (dump_file, "antloc", "", antloc, last_basic_block);
-      dump_bitmap_vector (dump_file, "avloc", "", avloc, last_basic_block);
-      dump_bitmap_vector (dump_file, "kill", "", kill, last_basic_block);
+      dump_bitmap_vector (dump_file, "transp", "", transp,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "antloc", "", antloc,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "avloc", "", avloc,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "kill", "", kill,
+			  last_basic_block_for_fn (cfun));
     }
 #endif
 
   /* Compute global availability.  */
-  avin = sbitmap_vector_alloc (last_basic_block, n_exprs);
-  avout = sbitmap_vector_alloc (last_basic_block, n_exprs);
+  avin = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
+  avout = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
   compute_available (avloc, kill, avout, avin);
   sbitmap_vector_free (avin);
 
   /* Compute global anticipatability.  */
-  antin = sbitmap_vector_alloc (last_basic_block, n_exprs);
-  antout = sbitmap_vector_alloc (last_basic_block, n_exprs);
+  antin = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
+  antout = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
   compute_antinout_edge (antloc, transp, antin, antout);
 
 #ifdef LCM_DEBUG_INFO
   if (dump_file)
     {
-      dump_bitmap_vector (dump_file, "antin", "", antin, last_basic_block);
-      dump_bitmap_vector (dump_file, "antout", "", antout, last_basic_block);
+      dump_bitmap_vector (dump_file, "antin", "", antin,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "antout", "", antout,
+			  last_basic_block_for_fn (cfun));
     }
 #endif
 
@@ -431,13 +438,15 @@ pre_edge_lcm (int n_exprs, sbitmap *transp,
   later = sbitmap_vector_alloc (num_edges, n_exprs);
 
   /* Allocate an extra element for the exit block in the laterin vector.  */
-  laterin = sbitmap_vector_alloc (last_basic_block + 1, n_exprs);
+  laterin = sbitmap_vector_alloc (last_basic_block_for_fn (cfun) + 1,
+				  n_exprs);
   compute_laterin (edge_list, earliest, antloc, later, laterin);
 
 #ifdef LCM_DEBUG_INFO
   if (dump_file)
     {
-      dump_bitmap_vector (dump_file, "laterin", "", laterin, last_basic_block + 1);
+      dump_bitmap_vector (dump_file, "laterin", "", laterin,
+			  last_basic_block_for_fn (cfun) + 1);
       dump_bitmap_vector (dump_file, "later", "", later, num_edges);
     }
 #endif
@@ -445,9 +454,9 @@ pre_edge_lcm (int n_exprs, sbitmap *transp,
   sbitmap_vector_free (earliest);
 
   *insert = sbitmap_vector_alloc (num_edges, n_exprs);
-  *del = sbitmap_vector_alloc (last_basic_block, n_exprs);
+  *del = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
   bitmap_vector_clear (*insert, num_edges);
-  bitmap_vector_clear (*del, last_basic_block);
+  bitmap_vector_clear (*del, last_basic_block_for_fn (cfun));
   compute_insert_delete (edge_list, antloc, later, laterin, *insert, *del);
 
   sbitmap_vector_free (laterin);
@@ -458,7 +467,7 @@ pre_edge_lcm (int n_exprs, sbitmap *transp,
     {
       dump_bitmap_vector (dump_file, "pre_insert_map", "", *insert, num_edges);
       dump_bitmap_vector (dump_file, "pre_delete_map", "", *del,
-			   last_basic_block);
+			  last_basic_block_for_fn (cfun));
     }
 #endif
 
@@ -484,7 +493,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
     XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
 
   /* We want a maximal solution.  */
-  bitmap_vector_ones (avout, last_basic_block);
+  bitmap_vector_ones (avout, last_basic_block_for_fn (cfun));
 
   /* Put every block on the worklist; this is necessary because of the
      optimistic initialization of AVOUT above.  */
@@ -666,10 +675,10 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
   /* Computation of insertion and deletion points requires computing NEAREROUT
      for the ENTRY block.  We allocated an extra entry in the NEAREROUT array
      for just this purpose.  */
-  bitmap_ones (nearerout[last_basic_block]);
+  bitmap_ones (nearerout[last_basic_block_for_fn (cfun)]);
   FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
-    bitmap_and (nearerout[last_basic_block],
-		     nearerout[last_basic_block],
+    bitmap_and (nearerout[last_basic_block_for_fn (cfun)],
+		     nearerout[last_basic_block_for_fn (cfun)],
 		     nearer[(size_t) e->aux]);
 
   clear_aux_for_edges ();
@@ -694,7 +703,8 @@ compute_rev_insert_delete (struct edge_list *edge_list, sbitmap *st_avloc,
     {
       basic_block b = INDEX_EDGE_PRED_BB (edge_list, x);
       if (b == ENTRY_BLOCK_PTR_FOR_FN (cfun))
-	bitmap_and_compl (insert[x], nearer[x], nearerout[last_basic_block]);
+	bitmap_and_compl (insert[x], nearer[x],
+			  nearerout[last_basic_block_for_fn (cfun)]);
       else
 	bitmap_and_compl (insert[x], nearer[x], nearerout[b->index]);
     }
@@ -719,15 +729,15 @@ pre_edge_rev_lcm (int n_exprs, sbitmap *transp,
   edge_list = create_edge_list ();
   num_edges = NUM_EDGES (edge_list);
 
-  st_antin = sbitmap_vector_alloc (last_basic_block, n_exprs);
-  st_antout = sbitmap_vector_alloc (last_basic_block, n_exprs);
-  bitmap_vector_clear (st_antin, last_basic_block);
-  bitmap_vector_clear (st_antout, last_basic_block);
+  st_antin = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
+  st_antout = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
+  bitmap_vector_clear (st_antin, last_basic_block_for_fn (cfun));
+  bitmap_vector_clear (st_antout, last_basic_block_for_fn (cfun));
   compute_antinout_edge (st_antloc, transp, st_antin, st_antout);
 
   /* Compute global anticipatability.  */
-  st_avout = sbitmap_vector_alloc (last_basic_block, n_exprs);
-  st_avin = sbitmap_vector_alloc (last_basic_block, n_exprs);
+  st_avout = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
+  st_avin = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
   compute_available (st_avloc, kill, st_avout, st_avin);
 
 #ifdef LCM_DEBUG_INFO
@@ -736,20 +746,26 @@ pre_edge_rev_lcm (int n_exprs, sbitmap *transp,
       fprintf (dump_file, "Edge List:\n");
       verify_edge_list (dump_file, edge_list);
       print_edge_list (dump_file, edge_list);
-      dump_bitmap_vector (dump_file, "transp", "", transp, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_avloc", "", st_avloc, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_antloc", "", st_antloc, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_antin", "", st_antin, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_antout", "", st_antout, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_kill", "", kill, last_basic_block);
+      dump_bitmap_vector (dump_file, "transp", "", transp,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_avloc", "", st_avloc,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_antloc", "", st_antloc,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_antin", "", st_antin,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_antout", "", st_antout,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_kill", "", kill,
+			  last_basic_block_for_fn (cfun));
     }
 #endif
 
 #ifdef LCM_DEBUG_INFO
   if (dump_file)
     {
-      dump_bitmap_vector (dump_file, "st_avout", "", st_avout, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_avin", "", st_avin, last_basic_block);
+      dump_bitmap_vector (dump_file, "st_avout", "", st_avout, last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_avin", "", st_avin, last_basic_block_for_fn (cfun));
     }
 #endif
 
@@ -772,14 +788,15 @@ pre_edge_rev_lcm (int n_exprs, sbitmap *transp,
   nearer = sbitmap_vector_alloc (num_edges, n_exprs);
 
   /* Allocate an extra element for the entry block.  */
-  nearerout = sbitmap_vector_alloc (last_basic_block + 1, n_exprs);
+  nearerout = sbitmap_vector_alloc (last_basic_block_for_fn (cfun) + 1,
+				    n_exprs);
   compute_nearerout (edge_list, farthest, st_avloc, nearer, nearerout);
 
 #ifdef LCM_DEBUG_INFO
   if (dump_file)
     {
       dump_bitmap_vector (dump_file, "nearerout", "", nearerout,
-			   last_basic_block + 1);
+			   last_basic_block_for_fn (cfun) + 1);
       dump_bitmap_vector (dump_file, "nearer", "", nearer, num_edges);
     }
 #endif
@@ -787,7 +804,7 @@ pre_edge_rev_lcm (int n_exprs, sbitmap *transp,
   sbitmap_vector_free (farthest);
 
   *insert = sbitmap_vector_alloc (num_edges, n_exprs);
-  *del = sbitmap_vector_alloc (last_basic_block, n_exprs);
+  *del = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_exprs);
   compute_rev_insert_delete (edge_list, st_avloc, nearer, nearerout,
 			     *insert, *del);
 
@@ -799,7 +816,7 @@ pre_edge_rev_lcm (int n_exprs, sbitmap *transp,
     {
       dump_bitmap_vector (dump_file, "pre_insert_map", "", *insert, num_edges);
       dump_bitmap_vector (dump_file, "pre_delete_map", "", *del,
-			   last_basic_block);
+			   last_basic_block_for_fn (cfun));
     }
 #endif
   return edge_list;
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index d1c7b9c..24ed83f 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -2007,7 +2007,7 @@ static void
 opt_info_start_duplication (struct opt_info *opt_info)
 {
   if (opt_info)
-    opt_info->first_new_block = last_basic_block;
+    opt_info->first_new_block = last_basic_block_for_fn (cfun);
 }
 
 /* Determine the number of iterations between initialization of the base
@@ -2368,7 +2368,9 @@ apply_opt_in_copies (struct opt_info *opt_info,
     for (ivts = opt_info->iv_to_split_head; ivts; ivts = ivts->next)
       allocate_basic_variable (ivts);
 
-  for (i = opt_info->first_new_block; i < (unsigned) last_basic_block; i++)
+  for (i = opt_info->first_new_block;
+       i < (unsigned) last_basic_block_for_fn (cfun);
+       i++)
     {
       bb = BASIC_BLOCK_FOR_FN (cfun, i);
       orig_bb = get_bb_original (bb);
@@ -2444,7 +2446,9 @@ apply_opt_in_copies (struct opt_info *opt_info,
   /* Rewrite also the original loop body.  Find them as originals of the blocks
      in the last copied iteration, i.e. those that have
      get_bb_copy (get_bb_original (bb)) == bb.  */
-  for (i = opt_info->first_new_block; i < (unsigned) last_basic_block; i++)
+  for (i = opt_info->first_new_block;
+       i < (unsigned) last_basic_block_for_fn (cfun);
+       i++)
     {
       bb = BASIC_BLOCK_FOR_FN (cfun, i);
       orig_bb = get_bb_original (bb);
diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c
index 6c9d622..60c47b9 100644
--- a/gcc/lower-subreg.c
+++ b/gcc/lower-subreg.c
@@ -1537,7 +1537,7 @@ decompose_multiword_subregs (bool decompose_copies)
 
       propagate_pseudo_copies ();
 
-      sub_blocks = sbitmap_alloc (last_basic_block);
+      sub_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (sub_blocks);
 
       EXECUTE_IF_SET_IN_BITMAP (decomposable_context, 0, regno, iter)
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index d2082fe..a677f86 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -996,7 +996,7 @@ lra_create_live_ranges (bool all_p)
   curr_point = 0;
   point_freq_vec.create (get_max_uid () * 2);
   lra_point_freq = point_freq_vec.address ();
-  int *post_order_rev_cfg = XNEWVEC (int, last_basic_block);
+  int *post_order_rev_cfg = XNEWVEC (int, last_basic_block_for_fn (cfun));
   int n_blocks_inverted = inverted_post_order_compute (post_order_rev_cfg);
   lra_assert (n_blocks_inverted == n_basic_blocks_for_fn (cfun));
   for (i = n_blocks_inverted - 1; i >= 0; --i)
diff --git a/gcc/lra.c b/gcc/lra.c
index d21d864..50a0786 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -2422,7 +2422,7 @@ lra (FILE *f)
   if (cfun->can_throw_non_call_exceptions)
     {
       sbitmap blocks;
-      blocks = sbitmap_alloc (last_basic_block);
+      blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_ones (blocks);
       find_many_sub_basic_blocks (blocks);
       sbitmap_free (blocks);
diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
index ed45094..a9e5069 100644
--- a/gcc/mode-switching.c
+++ b/gcc/mode-switching.c
@@ -480,7 +480,8 @@ optimize_mode_switching (void)
 	entry_exit_extra = 3;
 #endif
 	bb_info[n_entities]
-	  = XCNEWVEC (struct bb_info, last_basic_block + entry_exit_extra);
+	  = XCNEWVEC (struct bb_info,
+		      last_basic_block_for_fn (cfun) + entry_exit_extra);
 	entity_map[n_entities++] = e;
 	if (num_modes[e] > max_num_modes)
 	  max_num_modes = num_modes[e];
@@ -500,11 +501,11 @@ optimize_mode_switching (void)
 
   /* Create the bitmap vectors.  */
 
-  antic = sbitmap_vector_alloc (last_basic_block, n_entities);
-  transp = sbitmap_vector_alloc (last_basic_block, n_entities);
-  comp = sbitmap_vector_alloc (last_basic_block, n_entities);
+  antic = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_entities);
+  transp = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_entities);
+  comp = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_entities);
 
-  bitmap_vector_ones (transp, last_basic_block);
+  bitmap_vector_ones (transp, last_basic_block_for_fn (cfun));
 
   for (j = n_entities - 1; j >= 0; j--)
     {
@@ -608,7 +609,7 @@ optimize_mode_switching (void)
 #endif /* NORMAL_MODE */
     }
 
-  kill = sbitmap_vector_alloc (last_basic_block, n_entities);
+  kill = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), n_entities);
   for (i = 0; i < max_num_modes; i++)
     {
       int current_mode[N_ENTITIES];
@@ -616,8 +617,8 @@ optimize_mode_switching (void)
       sbitmap *insert;
 
       /* Set the anticipatable and computing arrays.  */
-      bitmap_vector_clear (antic, last_basic_block);
-      bitmap_vector_clear (comp, last_basic_block);
+      bitmap_vector_clear (antic, last_basic_block_for_fn (cfun));
+      bitmap_vector_clear (comp, last_basic_block_for_fn (cfun));
       for (j = n_entities - 1; j >= 0; j--)
 	{
 	  int m = current_mode[j] = MODE_PRIORITY_TO_MODE (entity_map[j], i);
diff --git a/gcc/recog.c b/gcc/recog.c
index 7f59756..c59aa0e 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -2898,7 +2898,7 @@ split_all_insns (void)
   bool changed;
   basic_block bb;
 
-  blocks = sbitmap_alloc (last_basic_block);
+  blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (blocks);
   changed = false;
 
diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index 9b52a63..0438875 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -1066,9 +1066,9 @@ copyprop_hardreg_forward (void)
   sbitmap visited;
   bool analyze_called = false;
 
-  all_vd = XNEWVEC (struct value_data, last_basic_block);
+  all_vd = XNEWVEC (struct value_data, last_basic_block_for_fn (cfun));
 
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (visited);
 
   if (MAY_HAVE_DEBUG_INSNS)
diff --git a/gcc/regrename.c b/gcc/regrename.c
index ac8b0f3..3c242fb 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -668,7 +668,7 @@ regrename_analyze (bitmap bb_mask)
   int n_bbs;
   int *inverse_postorder;
 
-  inverse_postorder = XNEWVEC (int, last_basic_block);
+  inverse_postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
   n_bbs = pre_and_rev_post_order_compute (NULL, inverse_postorder, false);
 
   /* Gather some information about the blocks in this function.  */
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 6864ec1..15c6db5 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -1283,7 +1283,7 @@ reload (rtx first, int global)
   if (cfun->can_throw_non_call_exceptions)
     {
       sbitmap blocks;
-      blocks = sbitmap_alloc (last_basic_block);
+      blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_ones (blocks);
       find_many_sub_basic_blocks (blocks);
       sbitmap_free (blocks);
diff --git a/gcc/resource.c b/gcc/resource.c
index 3106a09..861d969 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -1216,7 +1216,7 @@ init_resource_info (rtx epilogue_insn)
 
   /* Allocate and initialize the tables used by mark_target_live_regs.  */
   target_hash_table = XCNEWVEC (struct target_info *, TARGET_HASH_PRIME);
-  bb_ticks = XCNEWVEC (int, last_basic_block);
+  bb_ticks = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
   /* Set the BLOCK_FOR_INSN of each label that starts a basic block.  */
   FOR_EACH_BB (bb)
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 2d8b939..a85ee5b 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -642,23 +642,23 @@ haifa_find_rgns (void)
      STACK, SP and DFS_NR are only used during the first traversal.  */
 
   /* Allocate and initialize variables for the first traversal.  */
-  max_hdr = XNEWVEC (int, last_basic_block);
-  dfs_nr = XCNEWVEC (int, last_basic_block);
+  max_hdr = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  dfs_nr = XCNEWVEC (int, last_basic_block_for_fn (cfun));
   stack = XNEWVEC (edge_iterator, n_edges_for_fn (cfun));
 
-  inner = sbitmap_alloc (last_basic_block);
+  inner = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_ones (inner);
 
-  header = sbitmap_alloc (last_basic_block);
+  header = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (header);
 
-  in_queue = sbitmap_alloc (last_basic_block);
+  in_queue = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (in_queue);
 
-  in_stack = sbitmap_alloc (last_basic_block);
+  in_stack = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (in_stack);
 
-  for (i = 0; i < last_basic_block; i++)
+  for (i = 0; i < last_basic_block_for_fn (cfun); i++)
     max_hdr[i] = -1;
 
   #define EDGE_PASSED(E) (ei_end_p ((E)) || ei_edge ((E))->aux)
@@ -799,8 +799,9 @@ haifa_find_rgns (void)
       extend_regions_p = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS) > 0;
       if (extend_regions_p)
         {
-          degree1 = XNEWVEC (int, last_basic_block);
-          extended_rgn_header = sbitmap_alloc (last_basic_block);
+          degree1 = XNEWVEC (int, last_basic_block_for_fn (cfun));
+          extended_rgn_header
+	    = sbitmap_alloc (last_basic_block_for_fn (cfun));
           bitmap_clear (extended_rgn_header);
 	}
 
@@ -854,7 +855,8 @@ haifa_find_rgns (void)
                 /* We save degree in case when we meet a too_large region
 		   and cancel it.  We need a correct degree later when
                    calling extend_rgns.  */
-                memcpy (degree1, degree, last_basic_block * sizeof (int));
+                memcpy (degree1, degree,
+			last_basic_block_for_fn (cfun) * sizeof (int));
 
 	      /* Decrease degree of all I's successors for topological
 		 ordering.  */
@@ -1161,9 +1163,9 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 
   max_iter = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS);
 
-  max_hdr = XNEWVEC (int, last_basic_block);
+  max_hdr = XNEWVEC (int, last_basic_block_for_fn (cfun));
 
-  order = XNEWVEC (int, last_basic_block);
+  order = XNEWVEC (int, last_basic_block_for_fn (cfun));
   post_order_compute (order, false, false);
 
   for (i = nblocks - 1; i >= 0; i--)
@@ -1514,7 +1516,7 @@ compute_trg_info (int trg)
   sp->is_speculative = 0;
   sp->src_prob = REG_BR_PROB_BASE;
 
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
 
   for (i = trg + 1; i < current_nr_blocks; i++)
     {
@@ -2936,11 +2938,11 @@ static void
 realloc_bb_state_array (int saved_last_basic_block)
 {
   char *old_bb_state_array = bb_state_array;
-  size_t lbb = (size_t) last_basic_block;
+  size_t lbb = (size_t) last_basic_block_for_fn (cfun);
   size_t slbb = (size_t) saved_last_basic_block;
 
   /* Nothing to do if nothing changed since the last time this was called.  */
-  if (saved_last_basic_block == last_basic_block)
+  if (saved_last_basic_block == last_basic_block_for_fn (cfun))
     return;
 
   /* The selective scheduler doesn't use the state arrays.  */
@@ -3060,7 +3062,7 @@ schedule_region (int rgn)
       if (dbg_cnt (sched_block))
         {
 	  edge f;
-	  int saved_last_basic_block = last_basic_block;
+	  int saved_last_basic_block = last_basic_block_for_fn (cfun);
 
 	  schedule_block (&curr_bb, bb_state[first_bb->index]);
 	  gcc_assert (EBB_FIRST_BB (bb) == first_bb);
@@ -3430,9 +3432,12 @@ void
 extend_regions (void)
 {
   rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks_for_fn (cfun));
-  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks_for_fn (cfun));
-  block_to_bb = XRESIZEVEC (int, block_to_bb, last_basic_block);
-  containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
+  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table,
+			     n_basic_blocks_for_fn (cfun));
+  block_to_bb = XRESIZEVEC (int, block_to_bb,
+			    last_basic_block_for_fn (cfun));
+  containing_rgn = XRESIZEVEC (int, containing_rgn,
+			       last_basic_block_for_fn (cfun));
 }
 
 void
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index da84cce..f7cc9ec 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -4095,14 +4095,14 @@ get_seqno_by_preds (rtx insn)
 void
 sel_extend_global_bb_info (void)
 {
-  sel_global_bb_info.safe_grow_cleared (last_basic_block);
+  sel_global_bb_info.safe_grow_cleared (last_basic_block_for_fn (cfun));
 }
 
 /* Extend region-scope data structures for basic blocks.  */
 static void
 extend_region_bb_info (void)
 {
-  sel_region_bb_info.safe_grow_cleared (last_basic_block);
+  sel_region_bb_info.safe_grow_cleared (last_basic_block_for_fn (cfun));
 }
 
 /* Extend all data structures to fit for all basic blocks.  */
@@ -4905,9 +4905,10 @@ recompute_rev_top_order (void)
   int *postorder;
   int n_blocks, i;
 
-  if (!rev_top_order_index || rev_top_order_index_len < last_basic_block)
+  if (!rev_top_order_index
+      || rev_top_order_index_len < last_basic_block_for_fn (cfun))
     {
-      rev_top_order_index_len = last_basic_block;
+      rev_top_order_index_len = last_basic_block_for_fn (cfun);
       rev_top_order_index = XRESIZEVEC (int, rev_top_order_index,
                                         rev_top_order_index_len);
     }
@@ -6079,7 +6080,7 @@ sel_init_pipelining (void)
 		       | LOOPS_HAVE_MARKED_IRREDUCIBLE_REGIONS);
   current_loop_nest = NULL;
 
-  bbs_in_loop_rgns = sbitmap_alloc (last_basic_block);
+  bbs_in_loop_rgns = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (bbs_in_loop_rgns);
 
   recompute_rev_top_order ();
@@ -6145,13 +6146,13 @@ make_regions_from_the_rest (void)
   /* LOOP_HDR[I] == -1 if I-th bb doesn't belong to any loop,
      LOOP_HDR[I] == LOOP_HDR[J] iff basic blocks I and J reside within the same
      loop.  */
-  loop_hdr = XNEWVEC (int, last_basic_block);
-  degree = XCNEWVEC (int, last_basic_block);
+  loop_hdr = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  degree = XCNEWVEC (int, last_basic_block_for_fn (cfun));
 
 
   /* For each basic block that belongs to some loop assign the number
      of innermost loop it belongs to.  */
-  for (i = 0; i < last_basic_block; i++)
+  for (i = 0; i < last_basic_block_for_fn (cfun); i++)
     loop_hdr[i] = -1;
 
   FOR_EACH_BB (bb)
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index 378d6c7..808b0a7 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -844,7 +844,7 @@ remove_reachable_equiv_notes (basic_block bb, struct st_expr *smexpr)
   edge_iterator *stack, ei;
   int sp;
   edge act;
-  sbitmap visited = sbitmap_alloc (last_basic_block);
+  sbitmap visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   rtx last, insn, note;
   rtx mem = smexpr->pattern;
 
@@ -1016,11 +1016,13 @@ build_store_vectors (void)
 
   /* Build the gen_vector. This is any store in the table which is not killed
      by aliasing later in its block.  */
-  st_avloc = sbitmap_vector_alloc (last_basic_block, num_stores);
-  bitmap_vector_clear (st_avloc, last_basic_block);
+  st_avloc = sbitmap_vector_alloc (last_basic_block_for_fn (cfun),
+				   num_stores);
+  bitmap_vector_clear (st_avloc, last_basic_block_for_fn (cfun));
 
-  st_antloc = sbitmap_vector_alloc (last_basic_block, num_stores);
-  bitmap_vector_clear (st_antloc, last_basic_block);
+  st_antloc = sbitmap_vector_alloc (last_basic_block_for_fn (cfun),
+				    num_stores);
+  bitmap_vector_clear (st_antloc, last_basic_block_for_fn (cfun));
 
   for (ptr = first_st_expr (); ptr != NULL; ptr = next_st_expr (ptr))
     {
@@ -1052,11 +1054,11 @@ build_store_vectors (void)
 	}
     }
 
-  st_kill = sbitmap_vector_alloc (last_basic_block, num_stores);
-  bitmap_vector_clear (st_kill, last_basic_block);
+  st_kill = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), num_stores);
+  bitmap_vector_clear (st_kill, last_basic_block_for_fn (cfun));
 
-  st_transp = sbitmap_vector_alloc (last_basic_block, num_stores);
-  bitmap_vector_clear (st_transp, last_basic_block);
+  st_transp = sbitmap_vector_alloc (last_basic_block_for_fn (cfun), num_stores);
+  bitmap_vector_clear (st_transp, last_basic_block_for_fn (cfun));
   regs_set_in_block = XNEWVEC (int, max_gcse_regno);
 
   FOR_EACH_BB (bb)
@@ -1095,10 +1097,14 @@ build_store_vectors (void)
 
   if (dump_file)
     {
-      dump_bitmap_vector (dump_file, "st_antloc", "", st_antloc, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_kill", "", st_kill, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_transp", "", st_transp, last_basic_block);
-      dump_bitmap_vector (dump_file, "st_avloc", "", st_avloc, last_basic_block);
+      dump_bitmap_vector (dump_file, "st_antloc", "", st_antloc,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_kill", "", st_kill,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_transp", "", st_transp,
+			  last_basic_block_for_fn (cfun));
+      dump_bitmap_vector (dump_file, "st_avloc", "", st_avloc,
+			  last_basic_block_for_fn (cfun));
     }
 }
 
diff --git a/gcc/tracer.c b/gcc/tracer.c
index 99689500..de6877a 100644
--- a/gcc/tracer.c
+++ b/gcc/tracer.c
@@ -230,9 +230,9 @@ find_trace (basic_block bb, basic_block *trace)
 static bool
 tail_duplicate (void)
 {
-  fibnode_t *blocks = XCNEWVEC (fibnode_t, last_basic_block);
+  fibnode_t *blocks = XCNEWVEC (fibnode_t, last_basic_block_for_fn (cfun));
   basic_block *trace = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
-  int *counts = XNEWVEC (int, last_basic_block);
+  int *counts = XNEWVEC (int, last_basic_block_for_fn (cfun));
   int ninsns = 0, nduplicated = 0;
   gcov_type weighted_insns = 0, traced_insns = 0;
   fibheap_t heap = fibheap_new ();
@@ -243,7 +243,7 @@ tail_duplicate (void)
 
   /* Create an oversized sbitmap to reduce the chance that we need to
      resize it.  */
-  bb_seen = sbitmap_alloc (last_basic_block * 2);
+  bb_seen = sbitmap_alloc (last_basic_block_for_fn (cfun) * 2);
   bitmap_clear (bb_seen);
   initialize_original_copy_tables ();
 
diff --git a/gcc/trans-mem.c b/gcc/trans-mem.c
index 39715b8..2a6597d 100644
--- a/gcc/trans-mem.c
+++ b/gcc/trans-mem.c
@@ -1956,7 +1956,7 @@ tm_region_init (struct tm_region *region)
   /* We could store this information in bb->aux, but we may get called
      through get_all_tm_blocks() from another pass that may be already
      using bb->aux.  */
-  bb_regions.safe_grow_cleared (last_basic_block);
+  bb_regions.safe_grow_cleared (last_basic_block_for_fn (cfun));
 
   queue.safe_push (bb);
   bb_regions[bb->index] = region;
@@ -2628,7 +2628,7 @@ static vec<tm_region_p>
 get_bb_regions_instrumented (bool traverse_clones,
 			     bool include_uninstrumented_p)
 {
-  unsigned n = last_basic_block;
+  unsigned n = last_basic_block_for_fn (cfun);
   struct bb2reg_stuff stuff;
   vec<tm_region_p> ret;
 
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 57d6487..ec365b5 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -597,7 +597,7 @@ create_bb (void *h, void *e, basic_block after)
      not have to clear the newly allocated basic block here.  */
   bb = alloc_block ();
 
-  bb->index = last_basic_block;
+  bb->index = last_basic_block_for_fn (cfun);
   bb->flags = BB_NEW;
   set_bb_seq (bb, h ? (gimple_seq) h : NULL);
 
@@ -605,17 +605,20 @@ create_bb (void *h, void *e, basic_block after)
   link_block (bb, after);
 
   /* Grow the basic block array if needed.  */
-  if ((size_t) last_basic_block == basic_block_info_for_fn (cfun)->length ())
+  if ((size_t) last_basic_block_for_fn (cfun)
+      == basic_block_info_for_fn (cfun)->length ())
     {
-      size_t new_size = last_basic_block + (last_basic_block + 3) / 4;
+      size_t new_size =
+	(last_basic_block_for_fn (cfun)
+	 + (last_basic_block_for_fn (cfun) + 3) / 4);
       vec_safe_grow_cleared (basic_block_info_for_fn (cfun), new_size);
     }
 
   /* Add the newly created block to the array.  */
-  SET_BASIC_BLOCK_FOR_FN (cfun, last_basic_block, bb);
+  SET_BASIC_BLOCK_FOR_FN (cfun, last_basic_block_for_fn (cfun), bb);
 
   n_basic_blocks_for_fn (cfun)++;
-  last_basic_block++;
+  last_basic_block_for_fn (cfun)++;
 
   return bb;
 }
@@ -1228,7 +1231,7 @@ void
 cleanup_dead_labels (void)
 {
   basic_block bb;
-  label_for_bb = XCNEWVEC (struct label_record, last_basic_block);
+  label_for_bb = XCNEWVEC (struct label_record, last_basic_block_for_fn (cfun));
 
   /* Find a suitable label for each block.  We use the first user-defined
      label if there is one, or otherwise just the first label we see.  */
@@ -2116,7 +2119,7 @@ gimple_dump_cfg (FILE *file, int flags)
       dump_function_header (file, current_function_decl, flags);
       fprintf (file, ";; \n%d basic blocks, %d edges, last basic block %d.\n\n",
 	       n_basic_blocks_for_fn (cfun), n_edges_for_fn (cfun),
-	       last_basic_block);
+	       last_basic_block_for_fn (cfun));
 
       brief_dump_cfg (file, flags | TDF_COMMENT);
       fprintf (file, "\n");
@@ -7430,7 +7433,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
 {
   int i;
   int blocks_split = 0;
-  int last_bb = last_basic_block;
+  int last_bb = last_basic_block_for_fn (cfun);
   bool check_last_block = false;
 
   if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index 76d9749..50b4a68 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -585,7 +585,7 @@ split_bbs_on_noreturn_calls (void)
 	   BB is present in the cfg.  */
 	if (bb == NULL
 	    || bb->index < NUM_FIXED_BLOCKS
-	    || bb->index >= last_basic_block
+	    || bb->index >= last_basic_block_for_fn (cfun)
 	    || BASIC_BLOCK_FOR_FN (cfun, bb->index) != bb
 	    || !gimple_call_noreturn_p (stmt))
 	  continue;
@@ -642,7 +642,7 @@ cleanup_tree_cfg_1 (void)
 
   /* Start by iterating over all basic blocks.  We cannot use FOR_EACH_BB,
      since the basic blocks may get removed.  */
-  n = last_basic_block;
+  n = last_basic_block_for_fn (cfun);
   for (i = NUM_FIXED_BLOCKS; i < n; i++)
     {
       bb = BASIC_BLOCK_FOR_FN (cfun, i);
diff --git a/gcc/tree-complex.c b/gcc/tree-complex.c
index 80a978e..ff5ccab 100644
--- a/gcc/tree-complex.c
+++ b/gcc/tree-complex.c
@@ -1636,7 +1636,7 @@ tree_lower_complex (void)
   update_parameter_components ();
 
   /* ??? Ideally we'd traverse the blocks in breadth-first order.  */
-  old_last_basic_block = last_basic_block;
+  old_last_basic_block = last_basic_block_for_fn (cfun);
   FOR_EACH_BB (bb)
     {
       if (bb->index >= old_last_basic_block)
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index fd7eedb..ed06cb9 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -2488,7 +2488,7 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
 	new_bb->loop_father = entry_block_map->loop_father;
       }
 
-  last = last_basic_block;
+  last = last_basic_block_for_fn (cfun);
 
   /* Now that we've duplicated the blocks, duplicate their edges.  */
   bool can_make_abormal_goto
@@ -2544,7 +2544,7 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
 
   /* Zero out AUX fields of newly created block during EH edge
      insertion. */
-  for (; last < last_basic_block; last++)
+  for (; last < last_basic_block_for_fn (cfun); last++)
     {
       if (need_debug_cleanup)
 	maybe_move_debug_stmts_to_successors (id,
diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c
index ac10440..b6d3dd7 100644
--- a/gcc/tree-into-ssa.c
+++ b/gcc/tree-into-ssa.c
@@ -964,7 +964,7 @@ mark_phi_for_rewrite (basic_block bb, gimple phi)
 
   bitmap_set_bit (blocks_with_phis_to_rewrite, idx);
 
-  n = (unsigned) last_basic_block + 1;
+  n = (unsigned) last_basic_block_for_fn (cfun) + 1;
   if (phis_to_rewrite.length () < n)
     phis_to_rewrite.safe_grow_cleared (n);
 
@@ -2315,11 +2315,11 @@ rewrite_into_ssa (void)
   /* Initialize the set of interesting blocks.  The callback
      mark_def_sites will add to this set those blocks that the renamer
      should process.  */
-  interesting_blocks = sbitmap_alloc (last_basic_block);
+  interesting_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (interesting_blocks);
 
   /* Initialize dominance frontier.  */
-  dfs = XNEWVEC (bitmap_head, last_basic_block);
+  dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   FOR_EACH_BB (bb)
     bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
 
@@ -2635,7 +2635,7 @@ prepare_def_site_for (tree name, bool insert_phi_p)
   bb = gimple_bb (stmt);
   if (bb)
     {
-      gcc_checking_assert (bb->index < last_basic_block);
+      gcc_checking_assert (bb->index < last_basic_block_for_fn (cfun));
       mark_block_for_update (bb);
       mark_def_interesting (name, stmt, bb, insert_phi_p);
     }
@@ -3185,7 +3185,7 @@ update_ssa (unsigned update_flags)
 
   blocks_with_phis_to_rewrite = BITMAP_ALLOC (NULL);
   if (!phis_to_rewrite.exists ())
-    phis_to_rewrite.create (last_basic_block + 1);
+    phis_to_rewrite.create (last_basic_block_for_fn (cfun) + 1);
   blocks_to_update = BITMAP_ALLOC (NULL);
 
   /* Ensure that the dominance information is up-to-date.  */
@@ -3269,7 +3269,7 @@ update_ssa (unsigned update_flags)
 
       /* If the caller requested PHI nodes to be added, compute
 	 dominance frontiers.  */
-      dfs = XNEWVEC (bitmap_head, last_basic_block);
+      dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
       FOR_EACH_BB (bb)
 	bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
       compute_dominance_frontiers (dfs);
@@ -3317,7 +3317,7 @@ update_ssa (unsigned update_flags)
     get_var_info (sym)->info.current_def = NULL_TREE;
 
   /* Now start the renaming process at START_BB.  */
-  interesting_blocks = sbitmap_alloc (last_basic_block);
+  interesting_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (interesting_blocks);
   EXECUTE_IF_SET_IN_BITMAP (blocks_to_update, 0, i, bi)
     bitmap_set_bit (interesting_blocks, i);
@@ -3340,9 +3340,10 @@ update_ssa (unsigned update_flags)
       c = 0;
       EXECUTE_IF_SET_IN_BITMAP (blocks_to_update, 0, i, bi)
 	c++;
-      fprintf (dump_file, "Number of blocks in CFG: %d\n", last_basic_block);
+      fprintf (dump_file, "Number of blocks in CFG: %d\n",
+	       last_basic_block_for_fn (cfun));
       fprintf (dump_file, "Number of blocks to update: %d (%3.0f%%)\n",
-	       c, PERCENT (c, last_basic_block));
+	       c, PERCENT (c, last_basic_block_for_fn (cfun)));
 
       if (dump_flags & TDF_DETAILS)
 	{
diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c
index 8fc6fce..701dd44 100644
--- a/gcc/tree-ssa-dce.c
+++ b/gcc/tree-ssa-dce.c
@@ -1364,9 +1364,9 @@ tree_dce_init (bool aggressive)
 
   if (aggressive)
     {
-      last_stmt_necessary = sbitmap_alloc (last_basic_block);
+      last_stmt_necessary = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (last_stmt_necessary);
-      bb_contains_live_stmts = sbitmap_alloc (last_basic_block);
+      bb_contains_live_stmts = sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (bb_contains_live_stmts);
     }
 
@@ -1432,7 +1432,8 @@ perform_tree_ssa_dce (bool aggressive)
       calculate_dominance_info (CDI_POST_DOMINATORS);
       cd = new control_dependences (create_edge_list ());
 
-      visited_control_parents = sbitmap_alloc (last_basic_block);
+      visited_control_parents =
+	sbitmap_alloc (last_basic_block_for_fn (cfun));
       bitmap_clear (visited_control_parents);
 
       mark_dfs_back_edges ();
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index ebdf511..6cf60be 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -1793,7 +1793,7 @@ record_edge_info (basic_block bb)
 	    {
 	      int i;
               int n_labels = gimple_switch_num_labels (stmt);
-	      tree *info = XCNEWVEC (tree, last_basic_block);
+	      tree *info = XCNEWVEC (tree, last_basic_block_for_fn (cfun));
 	      edge e;
 	      edge_iterator ei;
 
diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
index 5d1a3b9..6ccf2fb 100644
--- a/gcc/tree-ssa-live.c
+++ b/gcc/tree-ssa-live.c
@@ -960,17 +960,17 @@ new_tree_live_info (var_map map)
 
   live = XNEW (struct tree_live_info_d);
   live->map = map;
-  live->num_blocks = last_basic_block;
+  live->num_blocks = last_basic_block_for_fn (cfun);
 
-  live->livein = XNEWVEC (bitmap_head, last_basic_block);
+  live->livein = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   FOR_EACH_BB (bb)
     bitmap_initialize (&live->livein[bb->index], &liveness_bitmap_obstack);
 
-  live->liveout = XNEWVEC (bitmap_head, last_basic_block);
+  live->liveout = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   FOR_EACH_BB (bb)
     bitmap_initialize (&live->liveout[bb->index], &liveness_bitmap_obstack);
 
-  live->work_stack = XNEWVEC (int, last_basic_block);
+  live->work_stack = XNEWVEC (int, last_basic_block_for_fn (cfun));
   live->stack_top = live->work_stack;
 
   live->global = BITMAP_ALLOC (&liveness_bitmap_obstack);
@@ -1043,7 +1043,7 @@ live_worklist (tree_live_info_p live)
 {
   unsigned b;
   basic_block bb;
-  sbitmap visited = sbitmap_alloc (last_basic_block + 1);
+  sbitmap visited = sbitmap_alloc (last_basic_block_for_fn (cfun) + 1);
   bitmap tmp = BITMAP_ALLOC (&liveness_bitmap_obstack);
 
   bitmap_clear (visited);
diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index 6292576..3aaf2b2 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -2401,7 +2401,7 @@ fill_always_executed_in_1 (struct loop *loop, sbitmap contains_call)
 static void
 fill_always_executed_in (void)
 {
-  sbitmap contains_call = sbitmap_alloc (last_basic_block);
+  sbitmap contains_call = sbitmap_alloc (last_basic_block_for_fn (cfun));
   basic_block bb;
   struct loop *loop;
 
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index de667ad..76d5958 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -728,13 +728,13 @@ copy_phi_node_args (unsigned first_new_block)
 {
   unsigned i;
 
-  for (i = first_new_block; i < (unsigned) last_basic_block; i++)
+  for (i = first_new_block; i < (unsigned) last_basic_block_for_fn (cfun); i++)
     BASIC_BLOCK_FOR_FN (cfun, i)->flags |= BB_DUPLICATED;
 
-  for (i = first_new_block; i < (unsigned) last_basic_block; i++)
+  for (i = first_new_block; i < (unsigned) last_basic_block_for_fn (cfun); i++)
     add_phi_args_after_copy_bb (BASIC_BLOCK_FOR_FN (cfun, i));
 
-  for (i = first_new_block; i < (unsigned) last_basic_block; i++)
+  for (i = first_new_block; i < (unsigned) last_basic_block_for_fn (cfun); i++)
     BASIC_BLOCK_FOR_FN (cfun, i)->flags &= ~BB_DUPLICATED;
 }
 
@@ -772,7 +772,7 @@ gimple_duplicate_loop_to_header_edge (struct loop *loop, edge e,
     verify_loop_closed_ssa (true);
 #endif
 
-  first_new_block = last_basic_block;
+  first_new_block = last_basic_block_for_fn (cfun);
   if (!duplicate_loop_to_header_edge (loop, e, ndupl, wont_exit,
 				      orig, to_remove, flags))
     return false;
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index dcce38a..c1c5b4f 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -2442,7 +2442,7 @@ compute_antic (void)
 
   /* If any predecessor edges are abnormal, we punt, so antic_in is empty.
      We pre-build the map of blocks with incoming abnormal edges here.  */
-  has_abnormal_preds = sbitmap_alloc (last_basic_block);
+  has_abnormal_preds = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (has_abnormal_preds);
 
   FOR_ALL_BB (block)
@@ -2471,7 +2471,7 @@ compute_antic (void)
   /* At the exit block we anticipate nothing.  */
   BB_VISITED (EXIT_BLOCK_PTR_FOR_FN (cfun)) = 1;
 
-  changed_blocks = sbitmap_alloc (last_basic_block + 1);
+  changed_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun) + 1);
   bitmap_ones (changed_blocks);
   while (changed)
     {
diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c
index 783b651..55ae68b 100644
--- a/gcc/tree-ssa-propagate.c
+++ b/gcc/tree-ssa-propagate.c
@@ -495,10 +495,10 @@ ssa_prop_init (void)
   vec_alloc (interesting_ssa_edges, 20);
   vec_alloc (varying_ssa_edges, 20);
 
-  executable_blocks = sbitmap_alloc (last_basic_block);
+  executable_blocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (executable_blocks);
 
-  bb_in_list = sbitmap_alloc (last_basic_block);
+  bb_in_list = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (bb_in_list);
 
   if (dump_file && (dump_flags & TDF_DETAILS))
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 9108983..1392879 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -4564,7 +4564,7 @@ init_reassoc (void)
   /* Reverse RPO (Reverse Post Order) will give us something where
      deeper loops come later.  */
   pre_and_rev_post_order_compute (NULL, bbs, false);
-  bb_rank = XCNEWVEC (long, last_basic_block);
+  bb_rank = XCNEWVEC (long, last_basic_block_for_fn (cfun));
   operand_rank = pointer_map_create ();
 
   /* Give each default definition a distinct rank.  This includes
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index e98652c..c271778 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3984,7 +3984,7 @@ init_scc_vn (void)
 
   shared_lookup_phiargs.create (0);
   shared_lookup_references.create (0);
-  rpo_numbers = XNEWVEC (int, last_basic_block);
+  rpo_numbers = XNEWVEC (int, last_basic_block_for_fn (cfun));
   rpo_numbers_temp =
     XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   pre_and_rev_post_order_compute (NULL, rpo_numbers_temp, false);
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index fbcbf78..a0eac67 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -771,7 +771,7 @@ init_worklist (void)
 {
   alloc_aux_for_blocks (sizeof (struct aux_bb_info));
   same_succ_htab.create (n_basic_blocks_for_fn (cfun));
-  same_succ_edge_flags = XCNEWVEC (int, last_basic_block);
+  same_succ_edge_flags = XCNEWVEC (int, last_basic_block_for_fn (cfun));
   deleted_bbs = BITMAP_ALLOC (NULL);
   deleted_bb_preds = BITMAP_ALLOC (NULL);
   worklist.create (n_basic_blocks_for_fn (cfun));
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index 92652de..d38e0dd 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -179,7 +179,7 @@ associate_equivalences_with_edges (void)
 	      && !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (cond))
 	    {
 	      int i, n_labels = gimple_switch_num_labels (stmt);
-	      tree *info = XCNEWVEC (tree, last_basic_block);
+	      tree *info = XCNEWVEC (tree, last_basic_block_for_fn (cfun));
 
 	      /* Walk over the case label vector.  Record blocks
 		 which are reached by a single case label which represents
diff --git a/gcc/tree-stdarg.c b/gcc/tree-stdarg.c
index 5a22cfd..8b168e0 100644
--- a/gcc/tree-stdarg.c
+++ b/gcc/tree-stdarg.c
@@ -72,7 +72,7 @@ reachable_at_most_once (basic_block va_arg_bb, basic_block va_start_bb)
   if (! dominated_by_p (CDI_DOMINATORS, va_arg_bb, va_start_bb))
     return false;
 
-  visited = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (visited);
   ret = true;
 
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 785e72f..06b6259 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -5934,13 +5934,13 @@ find_assert_locations_1 (basic_block bb, sbitmap live)
 static bool
 find_assert_locations (void)
 {
-  int *rpo = XNEWVEC (int, last_basic_block);
-  int *bb_rpo = XNEWVEC (int, last_basic_block);
-  int *last_rpo = XCNEWVEC (int, last_basic_block);
+  int *rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  int *bb_rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  int *last_rpo = XCNEWVEC (int, last_basic_block_for_fn (cfun));
   int rpo_cnt, i;
   bool need_asserts;
 
-  live = XCNEWVEC (sbitmap, last_basic_block);
+  live = XCNEWVEC (sbitmap, last_basic_block_for_fn (cfun));
   rpo_cnt = pre_and_rev_post_order_compute (NULL, rpo, false);
   for (i = 0; i < rpo_cnt; ++i)
     bb_rpo[rpo[i]] = i;
@@ -6034,7 +6034,7 @@ find_assert_locations (void)
   XDELETEVEC (rpo);
   XDELETEVEC (bb_rpo);
   XDELETEVEC (last_rpo);
-  for (i = 0; i < last_basic_block; ++i)
+  for (i = 0; i < last_basic_block_for_fn (cfun); ++i)
     if (live[i])
       sbitmap_free (live[i]);
   XDELETEVEC (live);
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 7d4a983..5bd0799 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -6928,7 +6928,7 @@ vt_find_locations (void)
   /* Compute reverse completion order of depth first search of the CFG
      so that the data-flow runs faster.  */
   rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
-  bb_order = XNEWVEC (int, last_basic_block);
+  bb_order = XNEWVEC (int, last_basic_block_for_fn (cfun));
   pre_and_rev_post_order_compute (NULL, rc_order, false);
   for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
     bb_order[rc_order[i]] = i;
@@ -6936,9 +6936,9 @@ vt_find_locations (void)
 
   worklist = fibheap_new ();
   pending = fibheap_new ();
-  visited = sbitmap_alloc (last_basic_block);
-  in_worklist = sbitmap_alloc (last_basic_block);
-  in_pending = sbitmap_alloc (last_basic_block);
+  visited = sbitmap_alloc (last_basic_block_for_fn (cfun));
+  in_worklist = sbitmap_alloc (last_basic_block_for_fn (cfun));
+  in_pending = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (in_worklist);
 
   FOR_EACH_BB (bb)
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 13/13] Eliminate FOR_ALL_BB macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (10 preceding siblings ...)
  2013-12-06 15:09                     ` [PATCH 10/13] Eliminate last_basic_block macro David Malcolm
@ 2013-12-06 15:12                     ` David Malcolm
  2013-12-06 15:12                     ` [PATCH 06/13] Eliminate BASIC_BLOCK macro David Malcolm
  2013-12-06 15:39                     ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h Richard Biener
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:12 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (FOR_ALL_BB): Eliminate macro.

	* cfg.c (alloc_aux_for_blocks, clear_aux_for_blocks): Replace
	uses of FOR_ALL_BB with FOR_ALL_BB_FN, making uses of cfun explicit.

	* cfganal.c (inverted_post_order_compute): Likewise.
	* cfgcleanup.c (try_optimize_cfg): Likewise.
	* cfgexpand.c (add_scope_conflicts): Likewise.
	* cfghooks.c (dump_flow_info, account_profile_record): Likewise.
	* cfgrtl.c (relink_block_chain): Likewise.
	* dce.c (mark_artificial_uses): Likewise.
	* df-core.c (df_set_blocks, df_compute_cfg_image, df_dump): Likewise.
	* df-problems.c (df_lr_verify_solution_start,
	df_lr_verify_solution_end, df_lr_verify_transfer_functions,
	df_live_verify_solution_start, df_live_verify_solution_end,
	df_live_set_all_dirty, df_live_verify_transfer_functions,
	df_md_local_compute): Likewise.
	* df-scan.c (df_scan_free_internal, df_scan_alloc,
	df_reorganize_refs_by_insn, df_scan_verify): Likewise.
	* dominance.c (compute_dom_fast_query, calculate_dominance_info,
	free_dominance_info): Likewise.
	* dse.c (dse_step1, dse_step3, dse_step4, dse_step6): Likewise.
	* graph.c (draw_cfg_edges): Likewise.
	* graphite-scop-detection.c (print_graphite_scop_statistics,
	dot_all_scops_1): Likewise.
	* graphite.c (print_global_statistics,
	print_graphite_scop_statistics): Likewise.
	* ira.c (do_reload): Likewise.
	* loop-init.c (loop_optimizer_finalize): Likewise.
	* lto-streamer-in.c (input_function): Likewise.
	* lto-streamer-out.c (output_function): Likewise.
	* mcf.c (adjust_cfg_counts): Likewise.
	* predict.c (estimate_loops): Likewise.
	* sched-rgn.c (haifa_find_rgns): Likewise.
	* tree-cfg.c (split_critical_edges): Likewise.
	* tree-dfa.c (renumber_gimple_stmt_uids): Likewise.
	* tree-loop-distribution.c (tree_loop_distribution): Likewise.
	* tree-ssa-pre.c (compute_antic, insert, init_pre): Likewise.
	* tree-ssa-propagate.c (ssa_prop_init): Likewise.
	* var-tracking.c (vt_initialize, vt_finalize): Likewise.
	* vtable-verify.c (vtable_verify_main): Likewise.
	* web.c (web_main): Likewise.
---
 gcc/basic-block.h             |  3 ---
 gcc/cfg.c                     |  4 ++--
 gcc/cfganal.c                 |  2 +-
 gcc/cfgcleanup.c              |  2 +-
 gcc/cfgexpand.c               |  4 ++--
 gcc/cfghooks.c                |  4 ++--
 gcc/cfgrtl.c                  |  2 +-
 gcc/dce.c                     |  2 +-
 gcc/df-core.c                 |  8 ++++----
 gcc/df-problems.c             | 22 +++++++++++-----------
 gcc/df-scan.c                 |  8 ++++----
 gcc/df.h                      |  2 +-
 gcc/dominance.c               |  6 +++---
 gcc/dse.c                     |  8 ++++----
 gcc/graph.c                   |  2 +-
 gcc/graphite-scop-detection.c |  6 +++---
 gcc/graphite.c                |  4 ++--
 gcc/ira.c                     |  4 ++--
 gcc/loop-init.c               |  2 +-
 gcc/lto-streamer-in.c         |  4 ++--
 gcc/lto-streamer-out.c        |  4 ++--
 gcc/mcf.c                     |  2 +-
 gcc/predict.c                 |  2 +-
 gcc/sched-rgn.c               |  2 +-
 gcc/tree-cfg.c                |  2 +-
 gcc/tree-dfa.c                |  2 +-
 gcc/tree-loop-distribution.c  |  2 +-
 gcc/tree-ssa-pre.c            |  8 ++++----
 gcc/tree-ssa-propagate.c      |  2 +-
 gcc/var-tracking.c            |  4 ++--
 gcc/vtable-verify.c           |  2 +-
 gcc/web.c                     |  6 +++---
 32 files changed, 67 insertions(+), 70 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 75f16ac..b323a1f 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -362,9 +362,6 @@ struct GTY(()) control_flow_graph {
 /* Cycles through _all_ basic blocks, even the fake ones (entry and
    exit block).  */
 
-#define FOR_ALL_BB(BB) \
-  for (BB = ENTRY_BLOCK_PTR_FOR_FN (cfun); BB; BB = BB->next_bb)
-
 #define FOR_ALL_BB_FN(BB, FN) \
   for (BB = ENTRY_BLOCK_PTR_FOR_FN (FN); BB; BB = BB->next_bb)
 
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 4f9d769..d4d00a4 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -576,7 +576,7 @@ alloc_aux_for_blocks (int size)
     {
       basic_block bb;
 
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	alloc_aux_for_block (bb, size);
     }
 }
@@ -588,7 +588,7 @@ clear_aux_for_blocks (void)
 {
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     bb->aux = NULL;
 }
 
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index 3371b4a..d7e0382 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -784,7 +784,7 @@ inverted_post_order_compute (int *post_order)
   bitmap_clear (visited);
 
   /* Put all blocks that have no successor into the initial work list.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     if (EDGE_COUNT (bb->succs) == 0)
       {
         /* Push the initial edge on to the stack.  */
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index cf72c03..684ab0f 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -2864,7 +2864,7 @@ try_optimize_cfg (int mode)
       while (changed);
     }
 
-  FOR_ALL_BB (b)
+  FOR_ALL_BB_FN (b, cfun)
     b->flags &= ~(BB_FORWARDER_BLOCK | BB_NONTHREADABLE_BLOCK);
 
   return changed_overall;
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 56bcd80..a73bd41 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -498,7 +498,7 @@ add_scope_conflicts (void)
 
      We then do a mostly classical bitmap liveness algorithm.  */
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     bb->aux = BITMAP_ALLOC (&stack_var_bitmap_obstack);
 
   rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
@@ -525,7 +525,7 @@ add_scope_conflicts (void)
 
   free (rpo);
   BITMAP_FREE (work);
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     BITMAP_FREE (bb->aux);
 }
 
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 78218b5..7a16887 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -325,7 +325,7 @@ dump_flow_info (FILE *file, int flags)
 
   fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks_for_fn (cfun),
 	   n_edges_for_fn (cfun));
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     dump_bb (file, bb, 0, flags);
 
   putc ('\n', file);
@@ -1408,7 +1408,7 @@ account_profile_record (struct profile_record *record, int after_pass)
   int sum;
   gcov_type lsum;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
    {
       if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
 	  && profile_status_for_fn (cfun) != PROFILE_ABSENT)
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 7734ac1..1a63249 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -3619,7 +3619,7 @@ relink_block_chain (bool stay_in_cfglayout_mode)
   EXIT_BLOCK_PTR_FOR_FN (cfun)->prev_bb = prev_bb;
 
   /* Then, clean up the aux fields.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bb->aux = NULL;
       if (!stay_in_cfglayout_mode)
diff --git a/gcc/dce.c b/gcc/dce.c
index 843dfc6..7e8278f 100644
--- a/gcc/dce.c
+++ b/gcc/dce.c
@@ -663,7 +663,7 @@ mark_artificial_uses (void)
   struct df_link *defs;
   df_ref *use_rec;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       for (use_rec = df_get_artificial_uses (bb->index);
 	   *use_rec; use_rec++)
diff --git a/gcc/df-core.c b/gcc/df-core.c
index ba57d39..045b54f 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -549,7 +549,7 @@ df_set_blocks (bitmap blocks)
 		    {
 		      basic_block bb;
 		      bitmap_initialize (&blocks_to_reset, &df_bitmap_obstack);
-		      FOR_ALL_BB (bb)
+		      FOR_ALL_BB_FN (bb, cfun)
 			{
 			  bitmap_set_bit (&blocks_to_reset, bb->index);
 			}
@@ -1720,7 +1720,7 @@ df_compute_cfg_image (void)
   int i;
   int * map;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       size += EDGE_COUNT (bb->succs);
     }
@@ -1728,7 +1728,7 @@ df_compute_cfg_image (void)
   map = XNEWVEC (int, size);
   map[0] = size;
   i = 1;
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       edge_iterator ei;
       edge e;
@@ -2021,7 +2021,7 @@ df_dump (FILE *file)
   basic_block bb;
   df_dump_start (file);
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       df_print_bb_index (bb, file);
       df_dump_top (bb, file);
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index 70f7254..4b926b6 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -1176,7 +1176,7 @@ df_lr_verify_solution_start (void)
   problem_data->in = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   problem_data->out = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bitmap_initialize (&problem_data->in[bb->index], &problem_data->lr_bitmaps);
       bitmap_initialize (&problem_data->out[bb->index], &problem_data->lr_bitmaps);
@@ -1205,7 +1205,7 @@ df_lr_verify_solution_end (void)
        in df_lr_finalize for details.  */
     df_lr->solutions_dirty = false;
   else
-    FOR_ALL_BB (bb)
+    FOR_ALL_BB_FN (bb, cfun)
       {
 	if ((!bitmap_equal_p (&problem_data->in[bb->index], DF_LR_IN (bb)))
 	    || (!bitmap_equal_p (&problem_data->out[bb->index], DF_LR_OUT (bb))))
@@ -1217,7 +1217,7 @@ df_lr_verify_solution_end (void)
 
   /* Cannot delete them immediately because you may want to dump them
      if the comparison fails.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bitmap_clear (&problem_data->in[bb->index]);
       bitmap_clear (&problem_data->out[bb->index]);
@@ -1294,7 +1294,7 @@ df_lr_verify_transfer_functions (void)
   bitmap_initialize (&saved_use, &bitmap_default_obstack);
   bitmap_initialize (&all_blocks, &bitmap_default_obstack);
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       struct df_lr_bb_info *bb_info = df_lr_get_bb_info (bb->index);
       bitmap_set_bit (&all_blocks, bb->index);
@@ -1713,7 +1713,7 @@ df_live_verify_solution_start (void)
   problem_data->in = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
   problem_data->out = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bitmap_initialize (&problem_data->in[bb->index], &problem_data->live_bitmaps);
       bitmap_initialize (&problem_data->out[bb->index], &problem_data->live_bitmaps);
@@ -1736,7 +1736,7 @@ df_live_verify_solution_end (void)
   if (!problem_data->out)
     return;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       if ((!bitmap_equal_p (&problem_data->in[bb->index], DF_LIVE_IN (bb)))
 	  || (!bitmap_equal_p (&problem_data->out[bb->index], DF_LIVE_OUT (bb))))
@@ -1748,7 +1748,7 @@ df_live_verify_solution_end (void)
 
   /* Cannot delete them immediately because you may want to dump them
      if the comparison fails.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bitmap_clear (&problem_data->in[bb->index]);
       bitmap_clear (&problem_data->out[bb->index]);
@@ -1814,7 +1814,7 @@ void
 df_live_set_all_dirty (void)
 {
   basic_block bb;
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     bitmap_set_bit (df_live->out_of_date_transfer_functions,
 		    bb->index);
 }
@@ -1840,7 +1840,7 @@ df_live_verify_transfer_functions (void)
 
   df_grow_insn_info ();
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       struct df_live_bb_info *bb_info = df_live_get_bb_info (bb->index);
       bitmap_set_bit (&all_blocks, bb->index);
@@ -4316,7 +4316,7 @@ df_md_local_compute (bitmap all_blocks)
   bitmap_clear (&seen_in_insn);
 
   frontiers = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     bitmap_initialize (&frontiers[bb->index], &bitmap_default_obstack);
 
   compute_dominance_frontiers (frontiers);
@@ -4334,7 +4334,7 @@ df_md_local_compute (bitmap all_blocks)
 	}
     }
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     bitmap_clear (&frontiers[bb->index]);
   free (frontiers);
 }
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index 9f6f67a..a35b12f 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -213,7 +213,7 @@ df_scan_free_internal (void)
 	}
     }
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       unsigned int bb_index = bb->index;
       struct df_scan_bb_info *bb_info = df_scan_get_bb_info (bb_index);
@@ -355,7 +355,7 @@ df_scan_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
   df_grow_insn_info ();
   df_grow_bb_info (df_scan);
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       unsigned int bb_index = bb->index;
       struct df_scan_bb_info *bb_info = df_scan_get_bb_info (bb_index);
@@ -1887,7 +1887,7 @@ df_reorganize_refs_by_insn (struct df_ref_info *ref_info,
     }
   else
     {
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	offset = df_reorganize_refs_by_insn_bb (bb, offset, ref_info,
 						include_defs, include_uses,
 						include_eq_uses);
@@ -4569,7 +4569,7 @@ df_scan_verify (void)
      clear a mark that has not been set as this means that the ref in
      the block or insn was not in the reg chain.  */
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     df_bb_verify (bb);
 
   /* (4) See if all reg chains are traversed a second time.  This time
diff --git a/gcc/df.h b/gcc/df.h
index e3ca67b..579712c 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -176,7 +176,7 @@ enum df_ref_order
     DF_REF_ORDER_BY_REG_WITH_NOTES,
 
     /* Organize the refs in insn order.  The insns are ordered within a
-       block, and the blocks are ordered by FOR_ALL_BB.  */
+       block, and the blocks are ordered by FOR_ALL_BB_FN.  */
     DF_REF_ORDER_BY_INSN,
 
     /* For uses, the refs within eq notes may be added for
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 69816c1..77f9471 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -624,7 +624,7 @@ compute_dom_fast_query (enum cdi_direction dir)
   if (dom_computed[dir_index] == DOM_OK)
     return;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       if (!bb->dom[dir_index]->father)
 	assign_dfs_numbers (bb->dom[dir_index], &num);
@@ -652,7 +652,7 @@ calculate_dominance_info (enum cdi_direction dir)
     {
       gcc_assert (!n_bbs_in_dom_tree[dir_index]);
 
-      FOR_ALL_BB (b)
+      FOR_ALL_BB_FN (b, cfun)
 	{
 	  b->dom[dir_index] = et_new_tree (b);
 	}
@@ -689,7 +689,7 @@ free_dominance_info (enum cdi_direction dir)
   if (!dom_info_available_p (dir))
     return;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       et_free_tree_force (bb->dom[dir_index]);
       bb->dom[dir_index] = NULL;
diff --git a/gcc/dse.c b/gcc/dse.c
index e5b0850..958097d 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -2708,7 +2708,7 @@ dse_step1 (void)
   bitmap_set_bit (all_blocks, ENTRY_BLOCK);
   bitmap_set_bit (all_blocks, EXIT_BLOCK);
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       insn_info_t ptr;
       bb_info_t bb_info = (bb_info_t) pool_alloc (bb_info_pool);
@@ -3290,7 +3290,7 @@ dse_step3 (bool for_spills)
 
   bitmap_ones (unreachable_blocks);
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bb_info_t bb_info = bb_table[bb->index];
       if (bb_info->gen)
@@ -3469,7 +3469,7 @@ dse_step4 (void)
       basic_block bb;
 
       fprintf (dump_file, "\n\n*** Global dataflow info after analysis.\n");
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	{
 	  bb_info_t bb_info = bb_table[bb->index];
 
@@ -3617,7 +3617,7 @@ dse_step6 (void)
 {
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bb_info_t bb_info = bb_table[bb->index];
       insn_info_t insn_info = bb_info->last_insn;
diff --git a/gcc/graph.c b/gcc/graph.c
index 6c405d8..545de44 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -255,7 +255,7 @@ draw_cfg_edges (pretty_printer *pp, struct function *fun)
 {
   basic_block bb;
   mark_dfs_back_edges ();
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     draw_cfg_node_succ_edges (pp, fun->funcdef_no, bb);
 
   /* Add an invisible edge from ENTRY to EXIT, to improve the graph layout.  */
diff --git a/gcc/graphite-scop-detection.c b/gcc/graphite-scop-detection.c
index a8db98d..fea15e5 100644
--- a/gcc/graphite-scop-detection.c
+++ b/gcc/graphite-scop-detection.c
@@ -1114,7 +1114,7 @@ print_graphite_scop_statistics (FILE* file, scop_p scop)
 
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator psi;
       loop_p loop = bb->loop_father;
@@ -1450,7 +1450,7 @@ dot_all_scops_1 (FILE *file, vec<scop_p> scops)
 
   fprintf (file, "digraph all {\n");
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       int part_of_scop = false;
 
@@ -1557,7 +1557,7 @@ dot_all_scops_1 (FILE *file, vec<scop_p> scops)
       fprintf (file, "  </TABLE>>, shape=box, style=\"setlinewidth(0)\"]\n");
     }
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
 	      fprintf (file, "%d -> %d;\n", bb->index, e->dest->index);
diff --git a/gcc/graphite.c b/gcc/graphite.c
index a573ea7..8af0402 100644
--- a/gcc/graphite.c
+++ b/gcc/graphite.c
@@ -94,7 +94,7 @@ print_global_statistics (FILE* file)
 
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator psi;
 
@@ -150,7 +150,7 @@ print_graphite_scop_statistics (FILE* file, scop_p scop)
 
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator psi;
       loop_p loop = bb->loop_father;
diff --git a/gcc/ira.c b/gcc/ira.c
index 7403870..d6462ca 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -5443,7 +5443,7 @@ do_reload (void)
 	  loop_optimizer_finalize ();
 	  free_dominance_info (CDI_DOMINATORS);
 	}
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	bb->loop_father = NULL;
       current_loops = NULL;
       
@@ -5492,7 +5492,7 @@ do_reload (void)
 	  loop_optimizer_finalize ();
 	  free_dominance_info (CDI_DOMINATORS);
 	}
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	bb->loop_father = NULL;
       current_loops = NULL;
       
diff --git a/gcc/loop-init.c b/gcc/loop-init.c
index 3dc6953..8c5553b 100644
--- a/gcc/loop-init.c
+++ b/gcc/loop-init.c
@@ -169,7 +169,7 @@ loop_optimizer_finalize (void)
   ggc_free (current_loops);
   current_loops = NULL;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bb->loop_father = NULL;
     }
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index 8dc94bd..9d4466b 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -976,7 +976,7 @@ input_function (tree fn_decl, struct data_in *data_in,
   /* Fix up the call statements that are mentioned in the callgraph
      edges.  */
   set_gimple_stmt_max_uid (cfun, 0);
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -991,7 +991,7 @@ input_function (tree fn_decl, struct data_in *data_in,
 	}
     }
   stmts = (gimple *) xcalloc (gimple_stmt_max_uid (fn), sizeof (gimple));
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator bsi = gsi_start_phis (bb);
       while (!gsi_end_p (bsi))
diff --git a/gcc/lto-streamer-out.c b/gcc/lto-streamer-out.c
index 615cc84..205518f 100644
--- a/gcc/lto-streamer-out.c
+++ b/gcc/lto-streamer-out.c
@@ -1868,7 +1868,7 @@ output_function (struct cgraph_node *node)
 	 virtual PHIs get re-computed on-the-fly which would make numbers
 	 inconsistent.  */
       set_gimple_stmt_max_uid (cfun, 0);
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	{
 	  gimple_stmt_iterator gsi;
 	  for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -1887,7 +1887,7 @@ output_function (struct cgraph_node *node)
 	}
       /* To avoid keeping duplicate gimple IDs in the statements, renumber
 	 virtual phis now.  */
-      FOR_ALL_BB (bb)
+      FOR_ALL_BB_FN (bb, cfun)
 	{
 	  gimple_stmt_iterator gsi;
 	  for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/mcf.c b/gcc/mcf.c
index f9b5505..146b43c 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -1245,7 +1245,7 @@ adjust_cfg_counts (fixup_graph_type *fixup_graph)
 		     sum_edge_counts (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
 
   /* Compute edge probabilities.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       if (bb->count)
         {
diff --git a/gcc/predict.c b/gcc/predict.c
index 78efb72..a5ad34f 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -2757,7 +2757,7 @@ estimate_loops (void)
     estimate_loops_at_level (current_loops->tree_root->inner);
 
   /* Now propagate the frequencies through all the blocks.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       bitmap_set_bit (tovisit, bb->index);
     }
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 7fa9759..863cd1d 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -745,7 +745,7 @@ haifa_find_rgns (void)
     }
 
   /* Reset ->aux field used by EDGE_PASSED.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       edge_iterator ei;
       edge e;
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 98434ac..03e177a 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -7940,7 +7940,7 @@ split_critical_edges (void)
      expensive.  So we want to enable recording of edge to CASE_LABEL_EXPR
      mappings around the calls to split_edge.  */
   start_recording_case_labels ();
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       FOR_EACH_EDGE (e, ei, bb->succs)
         {
diff --git a/gcc/tree-dfa.c b/gcc/tree-dfa.c
index 2d964d5..302822c 100644
--- a/gcc/tree-dfa.c
+++ b/gcc/tree-dfa.c
@@ -80,7 +80,7 @@ renumber_gimple_stmt_uids (void)
   basic_block bb;
 
   set_gimple_stmt_max_uid (cfun, 0);
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator bsi;
       for (bsi = gsi_start_phis (bb); !gsi_end_p (bsi); gsi_next (&bsi))
diff --git a/gcc/tree-loop-distribution.c b/gcc/tree-loop-distribution.c
index abf69f4..7d86b08 100644
--- a/gcc/tree-loop-distribution.c
+++ b/gcc/tree-loop-distribution.c
@@ -1677,7 +1677,7 @@ tree_loop_distribution (void)
   basic_block bb;
   control_dependences *cd = NULL;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator gsi;
       for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index c1c5b4f..bceea77 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -2445,7 +2445,7 @@ compute_antic (void)
   has_abnormal_preds = sbitmap_alloc (last_basic_block_for_fn (cfun));
   bitmap_clear (has_abnormal_preds);
 
-  FOR_ALL_BB (block)
+  FOR_ALL_BB_FN (block, cfun)
     {
       edge_iterator ei;
       edge e;
@@ -3660,7 +3660,7 @@ insert (void)
   basic_block bb;
   int num_iterations = 0;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     NEW_SETS (bb) = bitmap_set_new ();
 
   while (new_stuff)
@@ -3673,7 +3673,7 @@ insert (void)
       /* Clear the NEW sets before the next iteration.  We have already
          fully propagated its contents.  */
       if (new_stuff)
-	FOR_ALL_BB (bb)
+	FOR_ALL_BB_FN (bb, cfun)
 	  bitmap_set_free (NEW_SETS (bb));
     }
   statistics_histogram_event (cfun, "insert iterations", num_iterations);
@@ -4672,7 +4672,7 @@ init_pre (void)
 				       sizeof (struct bitmap_set), 30);
   pre_expr_pool = create_alloc_pool ("pre_expr nodes",
 				     sizeof (struct pre_expr_d), 30);
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       EXP_GEN (bb) = bitmap_set_new ();
       PHI_GEN (bb) = bitmap_set_new ();
diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c
index f9f084b..fc8041f 100644
--- a/gcc/tree-ssa-propagate.c
+++ b/gcc/tree-ssa-propagate.c
@@ -509,7 +509,7 @@ ssa_prop_init (void)
 
   /* Initially assume that every edge in the CFG is not executable.
      (including the edges coming out of the entry block).  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       gimple_stmt_iterator si;
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 175ec01..f38cbe1 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -9847,7 +9847,7 @@ vt_initialize (void)
   changed_variables.create (10);
 
   /* Init the IN and OUT sets.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       VTI (bb)->visited = false;
       VTI (bb)->flooded = false;
@@ -10186,7 +10186,7 @@ vt_finalize (void)
       VTI (bb)->mos.release ();
     }
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     {
       dataflow_set_destroy (&VTI (bb)->in);
       dataflow_set_destroy (&VTI (bb)->out);
diff --git a/gcc/vtable-verify.c b/gcc/vtable-verify.c
index 46c5621..401f40b 100644
--- a/gcc/vtable-verify.c
+++ b/gcc/vtable-verify.c
@@ -746,7 +746,7 @@ vtable_verify_main (void)
   unsigned int ret = 1;
   basic_block bb;
 
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
       verify_bb_vtables (bb);
 
   return ret;
diff --git a/gcc/web.c b/gcc/web.c
index 8e8c465..d281f45 100644
--- a/gcc/web.c
+++ b/gcc/web.c
@@ -351,7 +351,7 @@ web_main (void)
   df_set_flags (DF_DEFER_INSN_RESCAN);
 
   /* Assign ids to the uses.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     {
       unsigned int uid = INSN_UID (insn);
@@ -379,7 +379,7 @@ web_main (void)
   use_entry = XCNEWVEC (struct web_entry, uses_num);
 
   /* Produce the web.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     {
       unsigned int uid = INSN_UID (insn);
@@ -404,7 +404,7 @@ web_main (void)
 
   /* Update the instruction stream, allocating new registers for split pseudos
      in progress.  */
-  FOR_ALL_BB (bb)
+  FOR_ALL_BB_FN (bb, cfun)
     FOR_BB_INSNS (bb, insn)
     {
       unsigned int uid = INSN_UID (insn);
-- 
1.7.11.7


* [PATCH 06/13] Eliminate BASIC_BLOCK macro.
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (11 preceding siblings ...)
  2013-12-06 15:12                     ` [PATCH 13/13] Eliminate FOR_ALL_BB macro David Malcolm
@ 2013-12-06 15:12                     ` David Malcolm
  2013-12-06 15:39                     ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h Richard Biener
  13 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-06 15:12 UTC (permalink / raw)
  To: Richard Biener; +Cc: gcc-patches, David Malcolm

gcc/
	* basic-block.h (BASIC_BLOCK): Eliminate macro.

	* alias.c (init_alias_analysis): Eliminate BASIC_BLOCK macro in
	favor of uses of BASIC_BLOCK_FOR_FN, making uses of cfun explicit.
	* bt-load.c (compute_defs_uses_and_gen, compute_out, link_btr_uses,
	block_at_edge_of_live_range_p, migrate_btr_defs): Likewise.
	* caller-save.c (insert_one_insn): Likewise.
	* cfg.c (debug_bb, get_bb_original, get_bb_copy): Likewise.
	* cfgexpand.c (add_scope_conflicts): Likewise.
	* cfghooks.c (verify_flow_info): Likewise.
	* cfgloop.c (flow_loops_find): Likewise.
	* cfgrtl.c (rtl_flow_call_edges_add): Likewise.
	* config/mips/mips.c (r10k_insert_cache_barriers): Likewise.
	* config/s390/s390.c (s390_optimize_nonescaping_tx): Likewise.
	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.
	* cse.c (cse_main): Likewise.
	* dce.c (fast_dce): Likewise.
	* df-core.c (df_set_blocks, df_worklist_propagate_forward,
	df_worklist_propagate_backward, df_worklist_dataflow_doublequeue,
	df_bb_replace, df_dump_region): Likewise.
	* df-problems.c (df_rd_bb_local_compute, df_lr_bb_local_compute,
	df_live_bb_local_compute, df_chain_remove_problem,
	df_chain_create_bb, df_word_lr_bb_local_compute, df_note_bb_compute,
	df_md_bb_local_compute, df_md_local_compute,
	df_md_transfer_function): Likewise.
	* df-scan.c (df_scan_blocks, df_reorganize_refs_by_reg_by_insn,
	df_reorganize_refs_by_insn, df_bb_refs_collect,
	df_record_entry_block_defs, df_update_entry_block_defs,
	df_record_exit_block_uses): Likewise.
	* dominance.c (nearest_common_dominator_for_set): Likewise.
	* gcse.c (hoist_code): Likewise.
	* graph.c (draw_cfg_nodes_no_loops): Likewise.
	* ipa-inline-analysis.c (param_change_prob,
	estimate_function_body_sizes): Likewise.
	* ipa-split.c (dominated_by_forbidden): Likewise.
	* loop-unroll.c (apply_opt_in_copies): Likewise.
	* lower-subreg.c (decompose_multiword_subregs): Likewise.
	* lra-lives.c (lra_create_live_ranges): Likewise.
	* predict.c (propagate_freq): Likewise.
	* regrename.c (regrename_analyze): Likewise.
	* regstat.c (regstat_bb_compute_ri,
	regstat_bb_compute_calls_crossed): Likewise.
	* resource.c (mark_target_live_regs): Likewise.
	* sched-ebb.c (ebb_fix_recovery_cfg): Likewise.
	* sched-int.h (EBB_FIRST_BB, EBB_LAST_BB): Likewise.
	* sched-rgn.c (debug_region, dump_region_dot, too_large,
	haifa_find_rgns, extend_rgns, compute_dom_prob_ps, update_live,
	propagate_deps, sched_is_disabled_for_current_region_p): Likewise.
	* sched-vis.c (debug_bb_n_slim): Likewise.
	* sel-sched-ir.c (sel_finish_global_and_expr, verify_backedges,
	purge_empty_blocks, sel_remove_loop_preheader): Likewise.
	* sel-sched.c (remove_insns_that_need_bookkeeping)
	(current_region_empty_p, sel_region_init,
	simplify_changed_insns): Likewise.
	* trans-mem.c (execute_tm_mark, execute_tm_edges,
	tm_memopt_compute_antic, ipa_tm_scan_irr_function): Likewise.
	* tree-cfg.c (make_edges, end_recording_case_labels,
	label_to_block_fn, gimple_debug_bb, gimple_flow_call_edges_add,
	remove_edge_and_dominated_blocks, remove_edge_and_dominated_blocks,
	gimple_purge_all_dead_eh_edges,
	gimple_purge_all_dead_abnormal_call_edges): Likewise.
	* tree-cfgcleanup.c (fixup_noreturn_call,
	split_bbs_on_noreturn_calls, cleanup_tree_cfg_1): Likewise.
	* tree-inline.c (copy_cfg_body, fold_marked_statements): Likewise.
	* tree-into-ssa.c (set_livein_block, prune_unused_phi_nodes,
	insert_phi_nodes_for, insert_updated_phi_nodes_for): Likewise.
	* tree-ssa-dom.c (tree_ssa_dominator_optimize): Likewise.
	* tree-ssa-live.c (live_worklist): Likewise.
	* tree-ssa-loop-manip.c (compute_live_loop_exits,
	add_exit_phis_var, find_uses_to_rename, copy_phi_node_args): Likewise.
	* tree-ssa-pre.c (compute_antic): Likewise.
	* tree-ssa-reassoc.c (update_range_test, optimize_range_tests): Likewise.
	* tree-ssa-sink.c (nearest_common_dominator_of_uses): Likewise.
	* tree-ssa-tail-merge.c (same_succ_hash, same_succ_def::equal,
	same_succ_flush_bbs, update_worklist, set_cluster,
	same_phi_alternatives, find_clusters_1, apply_clusters,
	update_debug_stmts): Likewise.
	* tree-ssa-threadupdate.c (mark_threaded_blocks,
	thread_through_all_blocks): Likewise.
	* tree-ssa-uncprop.c (associate_equivalences_with_edges): Likewise.
	* tree-vrp.c (find_assert_locations): Likewise.
---
 gcc/alias.c                 |  2 +-
 gcc/basic-block.h           |  2 --
 gcc/bt-load.c               | 15 ++++++++-------
 gcc/caller-save.c           |  8 ++++----
 gcc/cfg.c                   |  6 +++---
 gcc/cfgexpand.c             |  2 +-
 gcc/cfghooks.c              |  2 +-
 gcc/cfgloop.c               |  2 +-
 gcc/cfgrtl.c                |  2 +-
 gcc/config/mips/mips.c      |  2 +-
 gcc/config/s390/s390.c      |  2 +-
 gcc/config/spu/spu.c        |  2 +-
 gcc/cse.c                   |  2 +-
 gcc/dce.c                   |  2 +-
 gcc/df-core.c               | 18 +++++++++---------
 gcc/df-problems.c           | 20 ++++++++++----------
 gcc/df-scan.c               | 26 ++++++++++++++++----------
 gcc/dominance.c             |  6 +++---
 gcc/gcse.c                  |  4 ++--
 gcc/graph.c                 |  2 +-
 gcc/ipa-inline-analysis.c   |  4 ++--
 gcc/ipa-split.c             |  3 ++-
 gcc/loop-unroll.c           |  4 ++--
 gcc/lower-subreg.c          |  2 +-
 gcc/lra-lives.c             |  2 +-
 gcc/predict.c               |  2 +-
 gcc/regrename.c             |  2 +-
 gcc/regstat.c               |  4 ++--
 gcc/resource.c              |  7 ++++---
 gcc/sched-ebb.c             |  2 +-
 gcc/sched-int.h             |  5 +++--
 gcc/sched-rgn.c             | 32 ++++++++++++++++++++------------
 gcc/sched-vis.c             |  2 +-
 gcc/sel-sched-ir.c          |  8 ++++----
 gcc/sel-sched.c             | 18 ++++++++++--------
 gcc/trans-mem.c             |  9 +++++----
 gcc/tree-cfg.c              | 22 ++++++++++++----------
 gcc/tree-cfgcleanup.c       |  8 ++++----
 gcc/tree-inline.c           | 19 +++++++++++--------
 gcc/tree-into-ssa.c         | 18 ++++++++++--------
 gcc/tree-ssa-dom.c          |  2 +-
 gcc/tree-ssa-live.c         |  2 +-
 gcc/tree-ssa-loop-manip.c   | 12 ++++++------
 gcc/tree-ssa-pre.c          |  4 ++--
 gcc/tree-ssa-reassoc.c      |  6 ++++--
 gcc/tree-ssa-sink.c         |  4 ++--
 gcc/tree-ssa-tail-merge.c   | 26 +++++++++++++-------------
 gcc/tree-ssa-threadupdate.c |  8 ++++----
 gcc/tree-ssa-uncprop.c      |  3 ++-
 gcc/tree-vrp.c              |  2 +-
 50 files changed, 199 insertions(+), 170 deletions(-)

diff --git a/gcc/alias.c b/gcc/alias.c
index 6a73b09..6290c83 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2989,7 +2989,7 @@ init_alias_analysis (void)
       /* Walk the insns adding values to the new_reg_base_value array.  */
       for (i = 0; i < rpo_cnt; i++)
 	{
-	  basic_block bb = BASIC_BLOCK (rpo[i]);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
 	  FOR_BB_INSNS (bb, insn)
 	    {
 	      if (NONDEBUG_INSN_P (insn))
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index f759e27..3bd011e 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -332,8 +332,6 @@ struct GTY(()) control_flow_graph {
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
 #define profile_status		(cfun->cfg->x_profile_status)
 
-#define BASIC_BLOCK(N)		((*basic_block_info)[(N)])
-
 /* For iterating over basic blocks.  */
 #define FOR_BB_BETWEEN(BB, FROM, TO, DIR) \
   for (BB = FROM; BB != TO; BB = BB->DIR)
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index 09eea06..bbd0dd8 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -460,7 +460,7 @@ compute_defs_uses_and_gen (fibheap_t all_btr_defs, btr_def *def_array,
   bitmap_vector_clear (bb_gen, last_basic_block);
   for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       int reg;
       btr_def defs_this_bb = NULL;
       rtx insn;
@@ -651,7 +651,7 @@ compute_out (sbitmap *bb_out, sbitmap *bb_gen, sbitmap *bb_kill, int max_uid)
       changed = 0;
       for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
 	{
-	  bitmap_union_of_preds (bb_in, bb_out, BASIC_BLOCK (i));
+	  bitmap_union_of_preds (bb_in, bb_out, BASIC_BLOCK_FOR_FN (cfun, i));
 	  changed |= bitmap_ior_and_compl (bb_out[i], bb_gen[i],
 					       bb_in, bb_kill[i]);
 	}
@@ -670,11 +670,11 @@ link_btr_uses (btr_def *def_array, btr_user *use_array, sbitmap *bb_out,
      Count up the number of reaching defs of each use.  */
   for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       rtx insn;
       rtx last;
 
-      bitmap_union_of_preds (reaching_defs, bb_out, BASIC_BLOCK (i));
+      bitmap_union_of_preds (reaching_defs, bb_out, BASIC_BLOCK_FOR_FN (cfun, i));
       for (insn = BB_HEAD (bb), last = NEXT_INSN (BB_END (bb));
 	   insn != last;
 	   insn = NEXT_INSN (insn))
@@ -814,13 +814,14 @@ build_btr_def_use_webs (fibheap_t all_btr_defs)
 static int
 block_at_edge_of_live_range_p (int bb, btr_def def)
 {
-  if (def->other_btr_uses_before_def && BASIC_BLOCK (bb) == def->bb)
+  if (def->other_btr_uses_before_def
+      && BASIC_BLOCK_FOR_FN (cfun, bb) == def->bb)
     return 1;
   else if (def->other_btr_uses_after_use)
     {
       btr_user user;
       for (user = def->uses; user != NULL; user = user->next)
-	if (BASIC_BLOCK (bb) == user->bb)
+	if (BASIC_BLOCK_FOR_FN (cfun, bb) == user->bb)
 	  return 1;
     }
   return 0;
@@ -1406,7 +1407,7 @@ migrate_btr_defs (enum reg_class btr_class, int allow_callee_save)
 
       for (i = NUM_FIXED_BLOCKS; i < last_basic_block; i++)
 	{
-	  basic_block bb = BASIC_BLOCK (i);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  fprintf (dump_file,
 		   "Basic block %d: count = " HOST_WIDEST_INT_PRINT_DEC
 		   " loop-depth = %d idom = %d\n",
diff --git a/gcc/caller-save.c b/gcc/caller-save.c
index b134cde..628fc0b 100644
--- a/gcc/caller-save.c
+++ b/gcc/caller-save.c
@@ -1414,8 +1414,8 @@ insert_one_insn (struct insn_chain *chain, int before_p, int code, rtx pat)
 		     &new_chain->live_throughout);
 
       CLEAR_REG_SET (&new_chain->dead_or_set);
-      if (chain->insn == BB_HEAD (BASIC_BLOCK (chain->block)))
-	BB_HEAD (BASIC_BLOCK (chain->block)) = new_chain->insn;
+      if (chain->insn == BB_HEAD (BASIC_BLOCK_FOR_FN (cfun, chain->block)))
+	BB_HEAD (BASIC_BLOCK_FOR_FN (cfun, chain->block)) = new_chain->insn;
     }
   else
     {
@@ -1434,8 +1434,8 @@ insert_one_insn (struct insn_chain *chain, int before_p, int code, rtx pat)
       note_stores (PATTERN (chain->insn), add_stored_regs,
 		   &new_chain->live_throughout);
       CLEAR_REG_SET (&new_chain->dead_or_set);
-      if (chain->insn == BB_END (BASIC_BLOCK (chain->block)))
-	BB_END (BASIC_BLOCK (chain->block)) = new_chain->insn;
+      if (chain->insn == BB_END (BASIC_BLOCK_FOR_FN (cfun, chain->block)))
+	BB_END (BASIC_BLOCK_FOR_FN (cfun, chain->block)) = new_chain->insn;
     }
   new_chain->block = chain->block;
   new_chain->is_caller_save_insn = 1;
diff --git a/gcc/cfg.c b/gcc/cfg.c
index f386168..3337372 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -690,7 +690,7 @@ debug_bb (basic_block bb)
 DEBUG_FUNCTION basic_block
 debug_bb_n (int n)
 {
-  basic_block bb = BASIC_BLOCK (n);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, n);
   debug_bb (bb);
   return bb;
 }
@@ -1139,7 +1139,7 @@ get_bb_original (basic_block bb)
   key.index1 = bb->index;
   entry = bb_original.find (&key);
   if (entry)
-    return BASIC_BLOCK (entry->index2);
+    return BASIC_BLOCK_FOR_FN (cfun, entry->index2);
   else
     return NULL;
 }
@@ -1164,7 +1164,7 @@ get_bb_copy (basic_block bb)
   key.index1 = bb->index;
   entry = bb_copy.find (&key);
   if (entry)
-    return BASIC_BLOCK (entry->index2);
+    return BASIC_BLOCK_FOR_FN (cfun, entry->index2);
   else
     return NULL;
 }
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 853ace2..d98ac5b 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -512,7 +512,7 @@ add_scope_conflicts (void)
       for (i = 0; i < n_bbs; i++)
 	{
 	  bitmap active;
-	  bb = BASIC_BLOCK (rpo[i]);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
 	  active = (bitmap)bb->aux;
 	  add_scope_conflicts_1 (bb, work, false);
 	  if (bitmap_ior_into (active, work))
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 2535c90..0cd6af0 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -106,7 +106,7 @@ verify_flow_info (void)
   FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb, NULL, next_bb)
     {
       if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
-	  && bb != BASIC_BLOCK (bb->index))
+	  && bb != BASIC_BLOCK_FOR_FN (cfun, bb->index))
 	{
 	  error ("bb %d on wrong place", bb->index);
 	  err = 1;
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 0b12e73..6245605 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -439,7 +439,7 @@ flow_loops_find (struct loops *loops)
   auto_vec<loop_p> larray (loops->larray->length ());
   for (b = 0; b < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; b++)
     {
-      basic_block header = BASIC_BLOCK (rc_order[b]);
+      basic_block header = BASIC_BLOCK_FOR_FN (cfun, rc_order[b]);
       if (bb_loop_header_p (header))
 	{
 	  struct loop *loop;
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 045d78b..de110f4 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -4831,7 +4831,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
 
   for (i = NUM_FIXED_BLOCKS; i < last_bb; i++)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       rtx insn;
       rtx prev_insn;
 
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index 36ba6df..7903443 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -15079,7 +15079,7 @@ r10k_insert_cache_barriers (void)
   n = pre_and_rev_post_order_compute (NULL, rev_post_order, false);
   for (i = 0; i < n; i++)
     {
-      bb = BASIC_BLOCK (rev_post_order[i]);
+      bb = BASIC_BLOCK_FOR_FN (cfun, rev_post_order[i]);
 
       /* If this block is only reached by unconditional edges, and if the
 	 source of every edge is protected, the beginning of the block is
diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
index a435b2d..fcd7532 100644
--- a/gcc/config/s390/s390.c
+++ b/gcc/config/s390/s390.c
@@ -7982,7 +7982,7 @@ s390_optimize_nonescaping_tx (void)
 
   for (bb_index = 0; bb_index < n_basic_blocks_for_fn (cfun); bb_index++)
     {
-      bb = BASIC_BLOCK (bb_index);
+      bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 
       if (!bb)
 	continue;
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index 5b8aef1..a658ee6 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2490,7 +2490,7 @@ spu_machine_dependent_reorg (void)
 
   for (i = n_basic_blocks_for_fn (cfun) - 1; i >= 0; i--)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       branch = 0;
       if (spu_bb_info[i].prop_jump)
 	{
diff --git a/gcc/cse.c b/gcc/cse.c
index d5357f0..215beb0 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -6564,7 +6564,7 @@ cse_main (rtx f ATTRIBUTE_UNUSED, int nregs)
 	 processed before.  */
       do
 	{
-	  bb = BASIC_BLOCK (rc_order[i++]);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, rc_order[i++]);
 	}
       while (bitmap_bit_p (cse_visited_basic_blocks, bb->index)
 	     && i < n_blocks);
diff --git a/gcc/dce.c b/gcc/dce.c
index 5c11cbe..07d31f7 100644
--- a/gcc/dce.c
+++ b/gcc/dce.c
@@ -1065,7 +1065,7 @@ fast_dce (bool word_level)
       for (i = 0; i < n_blocks; i++)
 	{
 	  int index = postorder[i];
-	  basic_block bb = BASIC_BLOCK (index);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, index);
 	  bool local_changed;
 
 	  if (index < NUM_FIXED_BLOCKS)
diff --git a/gcc/df-core.c b/gcc/df-core.c
index 4fb92a9..87419c2 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -520,7 +520,7 @@ df_set_blocks (bitmap blocks)
 
 		  EXECUTE_IF_SET_IN_BITMAP (&diff, 0, bb_index, bi)
 		    {
-		      basic_block bb = BASIC_BLOCK (bb_index);
+		      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 		      if (bb)
 			{
 			  void *bb_info = df_get_bb_info (dflow, bb_index);
@@ -933,7 +933,7 @@ df_worklist_propagate_forward (struct dataflow *dataflow,
 {
   edge e;
   edge_iterator ei;
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   bool changed = !age;
 
   /*  Calculate <conf_op> of incoming edges.  */
@@ -978,7 +978,7 @@ df_worklist_propagate_backward (struct dataflow *dataflow,
 {
   edge e;
   edge_iterator ei;
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   bool changed = !age;
 
   /*  Calculate <conf_op> of incoming edges.  */
@@ -1067,7 +1067,7 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
 
 	  bitmap_clear_bit (pending, index);
 	  bb_index = blocks_in_postorder[index];
-	  bb = BASIC_BLOCK (bb_index);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 	  prev_age = last_visit_age[index];
 	  if (dir == DF_FORWARD)
 	    changed = df_worklist_propagate_forward (dataflow, bb_index,
@@ -1086,7 +1086,7 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
       bitmap_clear (worklist);
     }
   for (i = 0; i < n_blocks; i++)
-    BASIC_BLOCK (blocks_in_postorder[i])->aux = NULL;
+    BASIC_BLOCK_FOR_FN (cfun, blocks_in_postorder[i])->aux = NULL;
 
   BITMAP_FREE (worklist);
   BITMAP_FREE (pending);
@@ -1631,7 +1631,7 @@ df_bb_replace (int old_index, basic_block new_block)
     fprintf (dump_file, "shoving block %d into %d\n", new_block_index, old_index);
 
   gcc_assert (df);
-  gcc_assert (BASIC_BLOCK (old_index) == NULL);
+  gcc_assert (BASIC_BLOCK_FOR_FN (cfun, old_index) == NULL);
 
   for (p = 0; p < df->num_problems_defined; p++)
     {
@@ -1647,7 +1647,7 @@ df_bb_replace (int old_index, basic_block new_block)
   df_clear_bb_dirty (new_block);
   SET_BASIC_BLOCK_FOR_FN (cfun, old_index, new_block);
   new_block->index = old_index;
-  df_set_bb_dirty (BASIC_BLOCK (old_index));
+  df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, old_index));
   SET_BASIC_BLOCK_FOR_FN (cfun, new_block_index, NULL);
 }
 
@@ -1659,7 +1659,7 @@ df_bb_replace (int old_index, basic_block new_block)
 void
 df_bb_delete (int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   int i;
 
   if (!df)
@@ -2045,7 +2045,7 @@ df_dump_region (FILE *file)
 
       EXECUTE_IF_SET_IN_BITMAP (df->blocks_to_analyze, 0, bb_index, bi)
 	{
-	  basic_block bb = BASIC_BLOCK (bb_index);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 	  dump_bb (file, bb, 0, TDF_DETAILS);
 	}
       fprintf (file, "\n");
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index c6349c8..2b42b48 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -353,7 +353,7 @@ df_rd_bb_local_compute_process_def (struct df_rd_bb_info *bb_info,
 static void
 df_rd_bb_local_compute (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_rd_bb_info *bb_info = df_rd_get_bb_info (bb_index);
   rtx insn;
 
@@ -835,7 +835,7 @@ df_lr_reset (bitmap all_blocks)
 static void
 df_lr_bb_local_compute (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_lr_bb_info *bb_info = df_lr_get_bb_info (bb_index);
   rtx insn;
   df_ref *def_rec;
@@ -1462,7 +1462,7 @@ df_live_reset (bitmap all_blocks)
 static void
 df_live_bb_local_compute (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_live_bb_info *bb_info = df_live_get_bb_info (bb_index);
   rtx insn;
   df_ref *def_rec;
@@ -1987,7 +1987,7 @@ df_chain_remove_problem (void)
       rtx insn;
       df_ref *def_rec;
       df_ref *use_rec;
-      basic_block bb = BASIC_BLOCK (bb_index);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
 
       if (df_chain_problem_p (DF_DU_CHAIN))
 	for (def_rec = df_get_artificial_defs (bb->index); *def_rec; def_rec++)
@@ -2105,7 +2105,7 @@ df_chain_create_bb_process_use (bitmap local_rd,
 static void
 df_chain_create_bb (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_rd_bb_info *bb_info = df_rd_get_bb_info (bb_index);
   rtx insn;
   bitmap_head cpy;
@@ -2531,7 +2531,7 @@ df_word_lr_mark_ref (df_ref ref, bool is_set, regset live)
 static void
 df_word_lr_bb_local_compute (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_word_lr_bb_info *bb_info = df_word_lr_get_bb_info (bb_index);
   rtx insn;
   df_ref *def_rec;
@@ -3154,7 +3154,7 @@ static void
 df_note_bb_compute (unsigned int bb_index,
 		    bitmap live, bitmap do_not_gen, bitmap artificial_uses)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   rtx insn;
   df_ref *def_rec;
   df_ref *use_rec;
@@ -4271,7 +4271,7 @@ df_md_bb_local_compute_process_def (struct df_md_bb_info *bb_info,
 static void
 df_md_bb_local_compute (unsigned int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_md_bb_info *bb_info = df_md_get_bb_info (bb_index);
   rtx insn;
 
@@ -4327,7 +4327,7 @@ df_md_local_compute (bitmap all_blocks)
       bitmap kill = &df_md_get_bb_info (bb_index)->kill;
       EXECUTE_IF_SET_IN_BITMAP (&frontiers[bb_index], 0, df_bb_index, bi2)
 	{
-	  basic_block bb = BASIC_BLOCK (df_bb_index);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, df_bb_index);
 	  if (bitmap_bit_p (all_blocks, df_bb_index))
 	    bitmap_ior_and_into (&df_md_get_bb_info (df_bb_index)->init, kill,
 				 df_get_live_in (bb));
@@ -4360,7 +4360,7 @@ df_md_reset (bitmap all_blocks)
 static bool
 df_md_transfer_function (int bb_index)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_md_bb_info *bb_info = df_md_get_bb_info (bb_index);
   bitmap in = &bb_info->in;
   bitmap out = &bb_info->out;
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index eb7e4d4..5f0ba4a 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -669,8 +669,8 @@ df_scan_blocks (void)
   df_record_entry_block_defs (df->entry_block_defs);
   df_get_exit_block_use_set (df->exit_block_uses);
   df_record_exit_block_uses (df->exit_block_uses);
-  df_set_bb_dirty (BASIC_BLOCK (ENTRY_BLOCK));
-  df_set_bb_dirty (BASIC_BLOCK (EXIT_BLOCK));
+  df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, ENTRY_BLOCK));
+  df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK));
 
   /* Regular blocks */
   FOR_EACH_BB (bb)
@@ -1637,7 +1637,7 @@ df_reorganize_refs_by_reg_by_insn (struct df_ref_info *ref_info,
 
   EXECUTE_IF_SET_IN_BITMAP (df->blocks_to_analyze, 0, bb_index, bi)
     {
-      basic_block bb = BASIC_BLOCK (bb_index);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
       rtx insn;
       df_ref *ref_rec;
 
@@ -1691,7 +1691,7 @@ df_reorganize_refs_by_reg_by_insn (struct df_ref_info *ref_info,
 
   EXECUTE_IF_SET_IN_BITMAP (df->blocks_to_analyze, 0, bb_index, bi)
     {
-      basic_block bb = BASIC_BLOCK (bb_index);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
       rtx insn;
       df_ref *ref_rec;
 
@@ -1876,7 +1876,9 @@ df_reorganize_refs_by_insn (struct df_ref_info *ref_info,
 
       EXECUTE_IF_SET_IN_BITMAP (df->blocks_to_analyze, 0, index, bi)
 	{
-	  offset = df_reorganize_refs_by_insn_bb (BASIC_BLOCK (index), offset, ref_info,
+	  offset = df_reorganize_refs_by_insn_bb (BASIC_BLOCK_FOR_FN (cfun,
+								      index),
+						  offset, ref_info,
 						  include_defs, include_uses,
 						  include_eq_uses);
 	}
@@ -3616,7 +3618,7 @@ df_bb_refs_collect (struct df_collection_rec *collection_rec, basic_block bb)
 void
 df_bb_refs_record (int bb_index, bool scan_insns)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   rtx insn;
   int luid = 0;
 
@@ -3890,7 +3892,9 @@ df_record_entry_block_defs (bitmap entry_block_defs)
   df_entry_block_defs_collect (&collection_rec, entry_block_defs);
 
   /* Process bb_refs chain */
-  df_refs_add_to_chains (&collection_rec, BASIC_BLOCK (ENTRY_BLOCK), NULL,
+  df_refs_add_to_chains (&collection_rec,
+			 BASIC_BLOCK_FOR_FN (cfun, ENTRY_BLOCK),
+			 NULL,
 			 copy_defs);
 }
 
@@ -3929,7 +3933,7 @@ df_update_entry_block_defs (void)
     {
       df_record_entry_block_defs (&refs);
       bitmap_copy (df->entry_block_defs, &refs);
-      df_set_bb_dirty (BASIC_BLOCK (ENTRY_BLOCK));
+      df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, ENTRY_BLOCK));
     }
   bitmap_clear (&refs);
 }
@@ -4061,7 +4065,9 @@ df_record_exit_block_uses (bitmap exit_block_uses)
   df_exit_block_uses_collect (&collection_rec, exit_block_uses);
 
   /* Process bb_refs chain */
-  df_refs_add_to_chains (&collection_rec, BASIC_BLOCK (EXIT_BLOCK), NULL,
+  df_refs_add_to_chains (&collection_rec,
+			 BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK),
+			 NULL,
 			 copy_uses);
 }
 
@@ -4100,7 +4106,7 @@ df_update_exit_block_uses (void)
     {
       df_record_exit_block_uses (&refs);
       bitmap_copy (df->exit_block_uses,& refs);
-      df_set_bb_dirty (BASIC_BLOCK (EXIT_BLOCK));
+      df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK));
     }
   bitmap_clear (&refs);
 }
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 5ece3f6..e9d2265 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -884,10 +884,10 @@ nearest_common_dominator_for_set (enum cdi_direction dir, bitmap blocks)
   basic_block dom;
 
   first = bitmap_first_set_bit (blocks);
-  dom = BASIC_BLOCK (first);
+  dom = BASIC_BLOCK_FOR_FN (cfun, first);
   EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i, bi)
-    if (dom != BASIC_BLOCK (i))
-      dom = nearest_common_dominator (dir, dom, BASIC_BLOCK (i));
+    if (dom != BASIC_BLOCK_FOR_FN (cfun, i))
+      dom = nearest_common_dominator (dir, dom, BASIC_BLOCK_FOR_FN (cfun, i));
 
   return dom;
 }
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 2c1ca21..8928c85 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -3337,7 +3337,7 @@ hoist_code (void)
 		  data->max_reg_pressure[pressure_class] += nregs;
 		  EXECUTE_IF_SET_IN_BITMAP (hoisted_bbs, 0, k, bi)
 		    {
-		      data = BB_DATA (BASIC_BLOCK (k));
+		      data = BB_DATA (BASIC_BLOCK_FOR_FN (cfun, k));
 		      data->max_reg_pressure[pressure_class] += nregs;
 		    }
 		}
@@ -3348,7 +3348,7 @@ hoist_code (void)
 		     hoisted.  */
 		  EXECUTE_IF_SET_IN_BITMAP (hoisted_bbs, 0, k, bi)
 		    {
-		      data = BB_DATA (BASIC_BLOCK (k));
+		      data = BB_DATA (BASIC_BLOCK_FOR_FN (cfun, k));
 		      bitmap_copy (data->live_in, data->backup);
 		      data->max_reg_pressure[pressure_class]
 			  = data->old_pressure;
diff --git a/gcc/graph.c b/gcc/graph.c
index b75135a..3f02cab 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -164,7 +164,7 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
   for (i = n_basic_blocks_for_fn (fun) - n;
        i < n_basic_blocks_for_fn (fun); i++)
     {
-      basic_block bb = BASIC_BLOCK (rpo[i]);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
       draw_cfg_node (pp, fun->funcdef_no, bb);
       bitmap_set_bit (visited, bb->index);
     }
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index ad6fe8f..e4ef9d4 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -2152,7 +2152,7 @@ param_change_prob (gimple stmt, int i)
 	max = 1;
 
       EXECUTE_IF_SET_IN_BITMAP (info.bb_set, 0, index, bi)
-	max = MIN (max, BASIC_BLOCK (index)->frequency);
+	max = MIN (max, BASIC_BLOCK_FOR_FN (cfun, index)->frequency);
 
       BITMAP_FREE (info.bb_set);
       if (max < bb->frequency)
@@ -2408,7 +2408,7 @@ estimate_function_body_sizes (struct cgraph_node *node, bool early)
   nblocks = pre_and_rev_post_order_compute (NULL, order, false);
   for (n = 0; n < nblocks; n++)
     {
-      bb = BASIC_BLOCK (order[n]);
+      bb = BASIC_BLOCK_FOR_FN (cfun, order[n]);
       freq = compute_call_stmt_bb_frequency (node->decl, bb);
 
       /* TODO: Obviously predicates can be propagated down across CFG.  */
diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
index d2e2d6f..eca86da 100644
--- a/gcc/ipa-split.c
+++ b/gcc/ipa-split.c
@@ -362,7 +362,8 @@ dominated_by_forbidden (basic_block bb)
 
   EXECUTE_IF_SET_IN_BITMAP (forbidden_dominators, 1, dom_bb, bi)
     {
-      if (dominated_by_p (CDI_DOMINATORS, bb, BASIC_BLOCK (dom_bb)))
+      if (dominated_by_p (CDI_DOMINATORS, bb,
+			  BASIC_BLOCK_FOR_FN (cfun, dom_bb)))
 	return true;
     }
 
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index 557915f..9910b4e 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -2370,7 +2370,7 @@ apply_opt_in_copies (struct opt_info *opt_info,
 
   for (i = opt_info->first_new_block; i < (unsigned) last_basic_block; i++)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       orig_bb = get_bb_original (bb);
 
       /* bb->aux holds position in copy sequence initialized by
@@ -2446,7 +2446,7 @@ apply_opt_in_copies (struct opt_info *opt_info,
      get_bb_copy (get_bb_original (bb)) == bb.  */
   for (i = opt_info->first_new_block; i < (unsigned) last_basic_block; i++)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       orig_bb = get_bb_original (bb);
       if (get_bb_copy (orig_bb) != bb)
 	continue;
diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c
index e67bc35..6c9d622 100644
--- a/gcc/lower-subreg.c
+++ b/gcc/lower-subreg.c
@@ -1647,7 +1647,7 @@ decompose_multiword_subregs (bool decompose_copies)
 	  rtx insn, end;
 	  edge fallthru;
 
-	  bb = BASIC_BLOCK (i);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  insn = BB_HEAD (bb);
 	  end = BB_END (bb);
 
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index efc19f2..d2082fe 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -1001,7 +1001,7 @@ lra_create_live_ranges (bool all_p)
   lra_assert (n_blocks_inverted == n_basic_blocks_for_fn (cfun));
   for (i = n_blocks_inverted - 1; i >= 0; --i)
     {
-      bb = BASIC_BLOCK (post_order_rev_cfg[i]);
+      bb = BASIC_BLOCK_FOR_FN (cfun, post_order_rev_cfg[i]);
       if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb
 	  == ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	continue;
diff --git a/gcc/predict.c b/gcc/predict.c
index e959a3b..1dec4dc 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -2596,7 +2596,7 @@ propagate_freq (basic_block head, bitmap tovisit)
       edge_iterator ei;
       int count = 0;
 
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
       FOR_EACH_EDGE (e, ei, bb->preds)
 	{
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 5e86fa5..ac8b0f3 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -696,7 +696,7 @@ regrename_analyze (bitmap bb_mask)
 
   for (i = 0; i < n_bbs; i++)
     {
-      basic_block bb1 = BASIC_BLOCK (inverse_postorder[i]);
+      basic_block bb1 = BASIC_BLOCK_FOR_FN (cfun, inverse_postorder[i]);
       struct bb_rename_info *this_info;
       bool success;
       edge e;
diff --git a/gcc/regstat.c b/gcc/regstat.c
index 85678a7..48d27c3 100644
--- a/gcc/regstat.c
+++ b/gcc/regstat.c
@@ -120,7 +120,7 @@ regstat_bb_compute_ri (unsigned int bb_index,
 		       bitmap local_live, bitmap local_processed,
 		       int *local_live_last_luid)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   rtx insn;
   df_ref *def_rec;
   df_ref *use_rec;
@@ -440,7 +440,7 @@ regstat_get_setjmp_crosses (void)
 static void
 regstat_bb_compute_calls_crossed (unsigned int bb_index, bitmap live)
 {
-  basic_block bb = BASIC_BLOCK (bb_index);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   rtx insn;
   df_ref *def_rec;
   df_ref *use_rec;
diff --git a/gcc/resource.c b/gcc/resource.c
index 4609c3a..3106a09 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -918,7 +918,8 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res)
 	 information, we can get it from there unless the insn at the
 	 start of the basic block has been deleted.  */
       if (tinfo && tinfo->block != -1
-	  && ! INSN_DELETED_P (BB_HEAD (BASIC_BLOCK (tinfo->block))))
+	  && ! INSN_DELETED_P (BB_HEAD (BASIC_BLOCK_FOR_FN (cfun,
+							    tinfo->block))))
 	b = tinfo->block;
     }
 
@@ -958,7 +959,7 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res)
      to use the LR problem.  Otherwise, we must assume everything is live.  */
   if (b != -1)
     {
-      regset regs_live = DF_LR_IN (BASIC_BLOCK (b));
+      regset regs_live = DF_LR_IN (BASIC_BLOCK_FOR_FN (cfun, b));
       rtx start_insn, stop_insn;
 
       /* Compute hard regs live at start of block.  */
@@ -967,7 +968,7 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res)
       /* Get starting and ending insn, handling the case where each might
 	 be a SEQUENCE.  */
       start_insn = (b == ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb->index ?
-		    insns : BB_HEAD (BASIC_BLOCK (b)));
+		    insns : BB_HEAD (BASIC_BLOCK_FOR_FN (cfun, b)));
       stop_insn = target;
 
       if (NONJUMP_INSN_P (start_insn)
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index 955501a..73af0a7 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -737,7 +737,7 @@ ebb_fix_recovery_cfg (int bbi ATTRIBUTE_UNUSED, int jump_bbi,
   gcc_assert (last_bb->index != bbi);
 
   if (jump_bb_nexti == last_bb->index)
-    last_bb = BASIC_BLOCK (jump_bbi);
+    last_bb = BASIC_BLOCK_FOR_FN (cfun, jump_bbi);
 }
 
 #endif /* INSN_SCHEDULING */
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index 84b5cb5..22ece1d 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -1416,8 +1416,9 @@ extern int *containing_rgn;
 /* The mapping from ebb to block.  */
 extern int *ebb_head;
 #define BB_TO_BLOCK(ebb) (rgn_bb_table[ebb_head[ebb]])
-#define EBB_FIRST_BB(ebb) BASIC_BLOCK (BB_TO_BLOCK (ebb))
-#define EBB_LAST_BB(ebb) BASIC_BLOCK (rgn_bb_table[ebb_head[ebb + 1] - 1])
+#define EBB_FIRST_BB(ebb) BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (ebb))
+#define EBB_LAST_BB(ebb) \
+  BASIC_BLOCK_FOR_FN (cfun, rgn_bb_table[ebb_head[ebb + 1] - 1])
 #define INSN_BB(INSN) (BLOCK_TO_BB (BLOCK_NUM (INSN)))
 
 extern int current_nr_blocks;
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 1663e2f..2d8b939 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -401,7 +401,8 @@ debug_region (int rgn)
 
   for (bb = 0; bb < rgn_table[rgn].rgn_nr_blocks; bb++)
     {
-      dump_bb (stderr, BASIC_BLOCK (rgn_bb_table[current_blocks + bb]),
+      dump_bb (stderr,
+	       BASIC_BLOCK_FOR_FN (cfun, rgn_bb_table[current_blocks + bb]),
 	       0, TDF_SLIM | TDF_BLOCKS);
       fprintf (stderr, "\n");
     }
@@ -440,7 +441,7 @@ dump_region_dot (FILE *f, int rgn)
       edge e;
       edge_iterator ei;
       int src_bb_num = rgn_bb_table[current_blocks + i];
-      basic_block bb = BASIC_BLOCK (src_bb_num);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, src_bb_num);
 
       FOR_EACH_EDGE (e, ei, bb->succs)
         if (bb_in_region_p (e->dest->index, rgn))
@@ -554,7 +555,7 @@ too_large (int block, int *num_bbs, int *num_insns)
 {
   (*num_bbs)++;
   (*num_insns) += (common_sched_info->estimate_number_of_insns
-                   (BASIC_BLOCK (block)));
+                   (BASIC_BLOCK_FOR_FN (cfun, block)));
 
   return ((*num_bbs > PARAM_VALUE (PARAM_MAX_SCHED_REGION_BLOCKS))
 	  || (*num_insns > PARAM_VALUE (PARAM_MAX_SCHED_REGION_INSNS)));
@@ -948,7 +949,8 @@ haifa_find_rgns (void)
 		  edge e;
 		  child = queue[++head];
 
-		  FOR_EACH_EDGE (e, ei, BASIC_BLOCK (child)->preds)
+		  FOR_EACH_EDGE (e, ei,
+				 BASIC_BLOCK_FOR_FN (cfun, child)->preds)
 		    {
 		      node = e->src->index;
 
@@ -1005,7 +1007,9 @@ haifa_find_rgns (void)
 			  CONTAINING_RGN (child) = nr_regions;
 			  queue[head] = queue[tail--];
 
-			  FOR_EACH_EDGE (e, ei, BASIC_BLOCK (child)->succs)
+			  FOR_EACH_EDGE (e, ei,
+					 BASIC_BLOCK_FOR_FN (cfun,
+							     child)->succs)
 			    if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 			      --degree[e->dest->index];
 			}
@@ -1200,7 +1204,7 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 	    {
 	      int hdr = -1;
 
-	      FOR_EACH_EDGE (e, ei, BASIC_BLOCK (bbn)->preds)
+	      FOR_EACH_EDGE (e, ei, BASIC_BLOCK_FOR_FN (cfun, bbn)->preds)
 		{
 		  int predn = e->src->index;
 
@@ -1304,7 +1308,7 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 	      CONTAINING_RGN (bbn) = nr_regions;
 	      BLOCK_TO_BB (bbn) = 0;
 
-	      FOR_EACH_EDGE (e, ei, BASIC_BLOCK (bbn)->succs)
+	      FOR_EACH_EDGE (e, ei, BASIC_BLOCK_FOR_FN (cfun, bbn)->succs)
 		if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 		  degree[e->dest->index]--;
 
@@ -1361,7 +1365,8 @@ extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 
 		      idx++;
 
-		      FOR_EACH_EDGE (e, ei, BASIC_BLOCK (succn)->succs)
+		      FOR_EACH_EDGE (e, ei,
+				     BASIC_BLOCK_FOR_FN (cfun, succn)->succs)
 			if (e->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))
 			  degree[e->dest->index]--;
 		    }
@@ -1420,7 +1425,8 @@ compute_dom_prob_ps (int bb)
   /* Initialize dom[bb] to '111..1'.  */
   bitmap_ones (dom[bb]);
 
-  FOR_EACH_EDGE (in_edge, in_ei, BASIC_BLOCK (BB_TO_BLOCK (bb))->preds)
+  FOR_EACH_EDGE (in_edge, in_ei,
+		 BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (bb))->preds)
     {
       int pred_bb;
       edge out_edge;
@@ -1838,7 +1844,8 @@ update_live (rtx insn, int src)
   (bb_from == bb_to							\
    || IS_RGN_ENTRY (bb_from)						\
    || (bitmap_bit_p (ancestor_edges[bb_to],					\
-	 EDGE_TO_BIT (single_pred_edge (BASIC_BLOCK (BB_TO_BLOCK (bb_from)))))))
+	 EDGE_TO_BIT (single_pred_edge (BASIC_BLOCK_FOR_FN (cfun, \
+							    BB_TO_BLOCK (bb_from)))))))
 
 /* Turns on the fed_by_spec_load flag for insns fed by load_insn.  */
 
@@ -2655,7 +2662,7 @@ deps_join (struct deps_desc *succ_deps, struct deps_desc *pred_deps)
 static void
 propagate_deps (int bb, struct deps_desc *pred_deps)
 {
-  basic_block block = BASIC_BLOCK (BB_TO_BLOCK (bb));
+  basic_block block = BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (bb));
   edge_iterator ei;
   edge e;
 
@@ -2864,7 +2871,8 @@ sched_is_disabled_for_current_region_p (void)
   int bb;
 
   for (bb = 0; bb < current_nr_blocks; bb++)
-    if (!(BASIC_BLOCK (BB_TO_BLOCK (bb))->flags & BB_DISABLE_SCHEDULE))
+    if (!(BASIC_BLOCK_FOR_FN (cfun,
+			      BB_TO_BLOCK (bb))->flags & BB_DISABLE_SCHEDULE))
       return false;
 
   return true;
diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c
index a965c4d..57b28a0 100644
--- a/gcc/sched-vis.c
+++ b/gcc/sched-vis.c
@@ -873,7 +873,7 @@ extern void debug_bb_n_slim (int);
 DEBUG_FUNCTION void
 debug_bb_n_slim (int n)
 {
-  basic_block bb = BASIC_BLOCK (n);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, n);
   debug_bb_slim (bb);
 }
 
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 7dfc703..da84cce 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3075,7 +3075,7 @@ sel_finish_global_and_expr (void)
     bbs.create (current_nr_blocks);
 
     for (i = 0; i < current_nr_blocks; i++)
-      bbs.quick_push (BASIC_BLOCK (BB_TO_BLOCK (i)));
+      bbs.quick_push (BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i)));
 
     /* Clear AV_SETs and INSN_EXPRs.  */
     {
@@ -3627,7 +3627,7 @@ verify_backedges (void)
       edge_iterator ei;
 
       for (i = 0; i < current_nr_blocks; i++)
-        FOR_EACH_EDGE (e, ei, BASIC_BLOCK (BB_TO_BLOCK (i))->succs)
+        FOR_EACH_EDGE (e, ei, BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i))->succs)
           if (in_current_region_p (e->dest)
               && BLOCK_TO_BB (e->dest->index) < i)
             n++;
@@ -3897,7 +3897,7 @@ purge_empty_blocks (void)
   /* Do not attempt to delete the first basic block in the region.  */
   for (i = 1; i < current_nr_blocks; )
     {
-      basic_block b = BASIC_BLOCK (BB_TO_BLOCK (i));
+      basic_block b = BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i));
 
       if (maybe_tidy_empty_bb (b))
 	continue;
@@ -6346,7 +6346,7 @@ sel_remove_loop_preheader (void)
   /* Add blocks that aren't within the current loop to PREHEADER_BLOCKS.  */
   for (i = 0; i < RGN_NR_BLOCKS (cur_rgn); i++)
     {
-      bb = BASIC_BLOCK (BB_TO_BLOCK (i));
+      bb = BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i));
 
       /* If the basic block belongs to region, but doesn't belong to
 	 corresponding loop, then it should be a preheader.  */
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index 1195f7e..3e1fd96 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -4903,7 +4903,8 @@ remove_insns_that_need_bookkeeping (fence_t fence, av_set_t *av_ptr)
 	  && (EXPR_SPEC (expr)
 	      || !EXPR_ORIG_BB_INDEX (expr)
 	      || !dominated_by_p (CDI_DOMINATORS,
-				  BASIC_BLOCK (EXPR_ORIG_BB_INDEX (expr)),
+				  BASIC_BLOCK_FOR_FN (cfun,
+						      EXPR_ORIG_BB_INDEX (expr)),
 				  BLOCK_FOR_INSN (FENCE_INSN (fence)))))
 	{
           if (sched_verbose >= 4)
@@ -6886,7 +6887,7 @@ current_region_empty_p (void)
 {
   int i;
   for (i = 0; i < current_nr_blocks; i++)
-    if (! sel_bb_empty_p (BASIC_BLOCK (BB_TO_BLOCK (i))))
+    if (! sel_bb_empty_p (BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i))))
       return false;
 
   return true;
@@ -6945,7 +6946,7 @@ sel_region_init (int rgn)
   bbs.create (current_nr_blocks);
 
   for (i = 0; i < current_nr_blocks; i++)
-    bbs.quick_push (BASIC_BLOCK (BB_TO_BLOCK (i)));
+    bbs.quick_push (BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i)));
 
   sel_init_bbs (bbs);
 
@@ -6980,13 +6981,14 @@ sel_region_init (int rgn)
      compute_live for the first insn of the loop.  */
   if (current_loop_nest)
     {
-      int header = (sel_is_loop_preheader_p (BASIC_BLOCK (BB_TO_BLOCK (0)))
-                    ? 1
-                    : 0);
+      int header =
+	(sel_is_loop_preheader_p (BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (0)))
+	 ? 1
+	 : 0);
 
       if (current_nr_blocks == header + 1)
         update_liveness_on_insn
-          (sel_bb_head (BASIC_BLOCK (BB_TO_BLOCK (header))));
+          (sel_bb_head (BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (header))));
     }
 
   /* Set hooks so that no newly generated insn will go out unnoticed.  */
@@ -7024,7 +7026,7 @@ simplify_changed_insns (void)
 
   for (i = 0; i < current_nr_blocks; i++)
     {
-      basic_block bb = BASIC_BLOCK (BB_TO_BLOCK (i));
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, BB_TO_BLOCK (i));
       rtx insn;
 
       FOR_BB_INSNS (bb, insn)
diff --git a/gcc/trans-mem.c b/gcc/trans-mem.c
index b2adc3d..39715b8 100644
--- a/gcc/trans-mem.c
+++ b/gcc/trans-mem.c
@@ -2993,7 +2993,7 @@ execute_tm_mark (void)
 		  && sub & GTMA_MAY_ENTER_IRREVOCABLE)
 		continue;
 	    }
-	  expand_block_tm (r, BASIC_BLOCK (i));
+	  expand_block_tm (r, BASIC_BLOCK_FOR_FN (cfun, i));
 	}
     }
 
@@ -3184,7 +3184,7 @@ execute_tm_edges (void)
 
   FOR_EACH_VEC_ELT (bb_regions, i, r)
     if (r != NULL)
-      expand_block_edges (r, BASIC_BLOCK (i));
+      expand_block_edges (r, BASIC_BLOCK_FOR_FN (cfun, i));
 
   bb_regions.release ();
 
@@ -3700,7 +3700,7 @@ tm_memopt_compute_antic (struct tm_region *region,
       unsigned int i;
       bitmap_iterator bi;
       EXECUTE_IF_SET_IN_BITMAP (region->exit_blocks, 0, i, bi)
-	BB_VISITED_P (BASIC_BLOCK (i)) = true;
+	BB_VISITED_P (BASIC_BLOCK_FOR_FN (cfun, i)) = true;
     }
 
   qin = worklist;
@@ -4572,7 +4572,8 @@ ipa_tm_scan_irr_function (struct cgraph_node *node, bool for_clone)
       unsigned i;
 
       EXECUTE_IF_SET_IN_BITMAP (new_irr, 0, i, bmi)
-	ipa_tm_decrement_clone_counts (BASIC_BLOCK (i), for_clone);
+	ipa_tm_decrement_clone_counts (BASIC_BLOCK_FOR_FN (cfun, i),
+				       for_clone);
 
       if (old_irr)
 	{
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 2d7916b..a706730 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -672,7 +672,8 @@ make_edges (void)
 
   /* Create an edge from entry to the first block with executable
      statements in it.  */
-  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), BASIC_BLOCK (NUM_FIXED_BLOCKS),
+  make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun),
+	     BASIC_BLOCK_FOR_FN (cfun, NUM_FIXED_BLOCKS),
 	     EDGE_FALLTHRU);
 
   /* Traverse the basic block array placing edges.  */
@@ -943,7 +944,7 @@ end_recording_case_labels (void)
   edge_to_cases = NULL;
   EXECUTE_IF_SET_IN_BITMAP (touched_switch_bbs, 0, i, bi)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       if (bb)
 	{
 	  gimple stmt = last_stmt (bb);
@@ -1027,7 +1028,8 @@ label_to_block_fn (struct function *ifun, tree dest)
      and undefined variable warnings quite right.  */
   if (seen_error () && uid < 0)
     {
-      gimple_stmt_iterator gsi = gsi_start_bb (BASIC_BLOCK (NUM_FIXED_BLOCKS));
+      gimple_stmt_iterator gsi =
+	gsi_start_bb (BASIC_BLOCK_FOR_FN (cfun, NUM_FIXED_BLOCKS));
       gimple stmt;
 
       stmt = gimple_build_label (dest);
@@ -2082,8 +2084,8 @@ gimple_debug_bb (basic_block bb)
 basic_block
 gimple_debug_bb_n (int n)
 {
-  gimple_debug_bb (BASIC_BLOCK (n));
-  return BASIC_BLOCK (n);
+  gimple_debug_bb (BASIC_BLOCK_FOR_FN (cfun, n));
+  return BASIC_BLOCK_FOR_FN (cfun, n);
 }
 
 
@@ -7476,7 +7478,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
      return or not...  */
   for (i = 0; i < last_bb; i++)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       gimple_stmt_iterator gsi;
       gimple stmt, last_stmt;
 
@@ -7605,7 +7607,7 @@ remove_edge_and_dominated_blocks (edge e)
 
       EXECUTE_IF_SET_IN_BITMAP (df, 0, i, bi)
 	{
-	  bb = BASIC_BLOCK (i);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  bitmap_set_bit (df_idom,
 			  get_immediate_dominator (CDI_DOMINATORS, bb)->index);
 	}
@@ -7643,7 +7645,7 @@ remove_edge_and_dominated_blocks (edge e)
      the dominance frontier of E.  Therefore, Y belongs to DF_IDOM.  */
   EXECUTE_IF_SET_IN_BITMAP (df_idom, 0, i, bi)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       for (dbb = first_dom_son (CDI_DOMINATORS, bb);
 	   dbb;
 	   dbb = next_dom_son (CDI_DOMINATORS, dbb))
@@ -7696,7 +7698,7 @@ gimple_purge_all_dead_eh_edges (const_bitmap blocks)
 
   EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i, bi)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
       /* Earlier gimple_purge_dead_eh_edges could have removed
 	 this basic block already.  */
@@ -7753,7 +7755,7 @@ gimple_purge_all_dead_abnormal_call_edges (const_bitmap blocks)
 
   EXECUTE_IF_SET_IN_BITMAP (blocks, 0, i, bi)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
       /* Earlier gimple_purge_dead_abnormal_call_edges could have removed
 	 this basic block already.  */
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index ab8a394..76d9749 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -551,7 +551,7 @@ fixup_noreturn_call (gimple stmt)
 		  SET_USE (use_p, error_mark_node);
 	    }
 	  EXECUTE_IF_SET_IN_BITMAP (blocks, 0, bb_index, bi)
-	    delete_basic_block (BASIC_BLOCK (bb_index));
+	    delete_basic_block (BASIC_BLOCK_FOR_FN (cfun, bb_index));
 	  BITMAP_FREE (blocks);
 	  release_ssa_name (op);
 	}
@@ -586,7 +586,7 @@ split_bbs_on_noreturn_calls (void)
 	if (bb == NULL
 	    || bb->index < NUM_FIXED_BLOCKS
 	    || bb->index >= last_basic_block
-	    || BASIC_BLOCK (bb->index) != bb
+	    || BASIC_BLOCK_FOR_FN (cfun, bb->index) != bb
 	    || !gimple_call_noreturn_p (stmt))
 	  continue;
 
@@ -645,7 +645,7 @@ cleanup_tree_cfg_1 (void)
   n = last_basic_block;
   for (i = NUM_FIXED_BLOCKS; i < n; i++)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       if (bb)
 	retval |= cleanup_tree_cfg_bb (bb);
     }
@@ -658,7 +658,7 @@ cleanup_tree_cfg_1 (void)
       if (i < NUM_FIXED_BLOCKS)
 	continue;
 
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       if (!bb)
 	continue;
 
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index abc216d..1d1bc1e 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -2547,12 +2547,13 @@ copy_cfg_body (copy_body_data * id, gcov_type count, int frequency_scale,
   for (; last < last_basic_block; last++)
     {
       if (need_debug_cleanup)
-	maybe_move_debug_stmts_to_successors (id, BASIC_BLOCK (last));
-      BASIC_BLOCK (last)->aux = NULL;
+	maybe_move_debug_stmts_to_successors (id,
+					      BASIC_BLOCK_FOR_FN (cfun, last));
+      BASIC_BLOCK_FOR_FN (cfun, last)->aux = NULL;
       /* Update call edge destinations.  This can not be done before loop
 	 info is updated, because we may split basic blocks.  */
       if (id->transform_call_graph_edges == CB_CGE_DUPLICATE)
-	redirect_all_calls (id, BASIC_BLOCK (last));
+	redirect_all_calls (id, BASIC_BLOCK_FOR_FN (cfun, last));
     }
   entry_block_map->aux = NULL;
   exit_block_map->aux = NULL;
@@ -4443,11 +4444,11 @@ static void
 fold_marked_statements (int first, struct pointer_set_t *statements)
 {
   for (; first < n_basic_blocks_for_fn (cfun); first++)
-    if (BASIC_BLOCK (first))
+    if (BASIC_BLOCK_FOR_FN (cfun, first))
       {
         gimple_stmt_iterator gsi;
 
-	for (gsi = gsi_start_bb (BASIC_BLOCK (first));
+	for (gsi = gsi_start_bb (BASIC_BLOCK_FOR_FN (cfun, first));
 	     !gsi_end_p (gsi);
 	     gsi_next (&gsi))
 	  if (pointer_set_contains (statements, gsi_stmt (gsi)))
@@ -4473,7 +4474,7 @@ fold_marked_statements (int first, struct pointer_set_t *statements)
 			  break;
 			}
 		      if (gsi_end_p (i2))
-			i2 = gsi_start_bb (BASIC_BLOCK (first));
+			i2 = gsi_start_bb (BASIC_BLOCK_FOR_FN (cfun, first));
 		      else
 			gsi_next (&i2);
 		      while (1)
@@ -4497,7 +4498,8 @@ fold_marked_statements (int first, struct pointer_set_t *statements)
 				 is mood anyway.  */
 			      if (maybe_clean_or_replace_eh_stmt (old_stmt,
 								  new_stmt))
-				gimple_purge_dead_eh_edges (BASIC_BLOCK (first));
+				gimple_purge_dead_eh_edges (
+				  BASIC_BLOCK_FOR_FN (cfun, first));
 			      break;
 			    }
 			  gsi_next (&i2);
@@ -4517,7 +4519,8 @@ fold_marked_statements (int first, struct pointer_set_t *statements)
 						       new_stmt);
 
 		  if (maybe_clean_or_replace_eh_stmt (old_stmt, new_stmt))
-		    gimple_purge_dead_eh_edges (BASIC_BLOCK (first));
+		    gimple_purge_dead_eh_edges (BASIC_BLOCK_FOR_FN (cfun,
+								    first));
 		}
 	    }
       }
diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c
index 0067cfe..ac10440 100644
--- a/gcc/tree-into-ssa.c
+++ b/gcc/tree-into-ssa.c
@@ -558,7 +558,7 @@ set_livein_block (tree var, basic_block bb)
 
       if (def_block_index == -1
 	  || ! dominated_by_p (CDI_DOMINATORS, bb,
-	                       BASIC_BLOCK (def_block_index)))
+	                       BASIC_BLOCK_FOR_FN (cfun, def_block_index)))
 	info->need_phi_state = NEED_PHI_STATE_MAYBE;
     }
   else
@@ -821,7 +821,7 @@ prune_unused_phi_nodes (bitmap phis, bitmap kills, bitmap uses)
   adef = 1;
   EXECUTE_IF_SET_IN_BITMAP (to_remove, 0, i, bi)
     {
-      def_bb = BASIC_BLOCK (i);
+      def_bb = BASIC_BLOCK_FOR_FN (cfun, i);
       defs[adef].bb_index = i;
       defs[adef].dfs_num = bb_dom_dfs_in (CDI_DOMINATORS, def_bb);
       defs[adef + 1].bb_index = i;
@@ -895,7 +895,8 @@ prune_unused_phi_nodes (bitmap phis, bitmap kills, bitmap uses)
 	p = b;
       else
 	{
-	  use_bb = get_immediate_dominator (CDI_DOMINATORS, BASIC_BLOCK (b));
+	  use_bb = get_immediate_dominator (CDI_DOMINATORS,
+					    BASIC_BLOCK_FOR_FN (cfun, b));
 	  p = find_dfsnum_interval (defs, n_defs,
 				    bb_dom_dfs_in (CDI_DOMINATORS, use_bb));
 	  if (!bitmap_bit_p (phis, p))
@@ -907,7 +908,7 @@ prune_unused_phi_nodes (bitmap phis, bitmap kills, bitmap uses)
 	continue;
 
       /* Add the new uses to the worklist.  */
-      def_bb = BASIC_BLOCK (p);
+      def_bb = BASIC_BLOCK_FOR_FN (cfun, p);
       FOR_EACH_EDGE (e, ei, def_bb->preds)
 	{
 	  u = e->src->index;
@@ -1004,7 +1005,7 @@ insert_phi_nodes_for (tree var, bitmap phi_insertion_points, bool update_p)
   /* And insert the PHI nodes.  */
   EXECUTE_IF_SET_IN_BITMAP (phi_insertion_points, 0, bb_index, bi)
     {
-      bb = BASIC_BLOCK (bb_index);
+      bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
       if (update_p)
 	mark_block_for_update (bb);
 
@@ -3021,8 +3022,9 @@ insert_updated_phi_nodes_for (tree var, bitmap_head *dfs, bitmap blocks,
 						    db->def_blocks);
 	  if (entry != ENTRY_BLOCK_PTR_FOR_FN (cfun))
 	    EXECUTE_IF_SET_IN_BITMAP (idf, 0, i, bi)
-	      if (BASIC_BLOCK (i) != entry
-		  && dominated_by_p (CDI_DOMINATORS, BASIC_BLOCK (i), entry))
+	      if (BASIC_BLOCK_FOR_FN (cfun, i) != entry
+		  && dominated_by_p (CDI_DOMINATORS,
+				     BASIC_BLOCK_FOR_FN (cfun, i), entry))
 		bitmap_set_bit (pruned_idf, i);
 	}
       else
@@ -3054,7 +3056,7 @@ insert_updated_phi_nodes_for (tree var, bitmap_head *dfs, bitmap blocks,
 	{
 	  edge e;
 	  edge_iterator ei;
-	  basic_block bb = BASIC_BLOCK (i);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
 	  FOR_EACH_EDGE (e, ei, bb->preds)
 	    if (e->src->index >= 0)
diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
index 82005af..ebdf511 100644
--- a/gcc/tree-ssa-dom.c
+++ b/gcc/tree-ssa-dom.c
@@ -902,7 +902,7 @@ tree_ssa_dominator_optimize (void)
 	 iterator.  */
       EXECUTE_IF_SET_IN_BITMAP (need_eh_cleanup, 0, i, bi)
 	{
-	  basic_block bb = BASIC_BLOCK (i);
+	  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  if (bb == NULL)
 	    continue;
 	  while (single_succ_p (bb)
diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
index 8ad5d9a..5d1a3b9 100644
--- a/gcc/tree-ssa-live.c
+++ b/gcc/tree-ssa-live.c
@@ -1057,7 +1057,7 @@ live_worklist (tree_live_info_p live)
   while (live->stack_top != live->work_stack)
     {
       b = *--(live->stack_top);
-      loe_visit_block (live, BASIC_BLOCK (b), visited, tmp);
+      loe_visit_block (live, BASIC_BLOCK_FOR_FN (cfun, b), visited, tmp);
     }
 
   BITMAP_FREE (tmp);
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index e1d55ff..de667ad 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -202,7 +202,7 @@ compute_live_loop_exits (bitmap live_exits, bitmap use_blocks,
 
   EXECUTE_IF_SET_IN_BITMAP (use_blocks, 0, i, bi)
     {
-      basic_block use_bb = BASIC_BLOCK (i);
+      basic_block use_bb = BASIC_BLOCK_FOR_FN (cfun, i);
       struct loop *use_loop = use_bb->loop_father;
       gcc_checking_assert (def_loop != use_loop
 			   && ! flow_loop_nested_p (def_loop, use_loop));
@@ -325,7 +325,7 @@ add_exit_phis_var (tree var, bitmap use_blocks, bitmap *loop_exits)
 
   EXECUTE_IF_SET_IN_BITMAP (live_exits, 0, index, bi)
     {
-      add_exit_phi (BASIC_BLOCK (index), var);
+      add_exit_phi (BASIC_BLOCK_FOR_FN (cfun, index), var);
     }
 
   BITMAP_FREE (live_exits);
@@ -461,7 +461,7 @@ find_uses_to_rename (bitmap changed_bbs, bitmap *use_blocks, bitmap need_phis)
 
   if (changed_bbs)
     EXECUTE_IF_SET_IN_BITMAP (changed_bbs, 0, index, bi)
-      find_uses_to_rename_bb (BASIC_BLOCK (index), use_blocks, need_phis);
+      find_uses_to_rename_bb (BASIC_BLOCK_FOR_FN (cfun, index), use_blocks, need_phis);
   else
     FOR_EACH_BB (bb)
       find_uses_to_rename_bb (bb, use_blocks, need_phis);
@@ -729,13 +729,13 @@ copy_phi_node_args (unsigned first_new_block)
   unsigned i;
 
   for (i = first_new_block; i < (unsigned) last_basic_block; i++)
-    BASIC_BLOCK (i)->flags |= BB_DUPLICATED;
+    BASIC_BLOCK_FOR_FN (cfun, i)->flags |= BB_DUPLICATED;
 
   for (i = first_new_block; i < (unsigned) last_basic_block; i++)
-    add_phi_args_after_copy_bb (BASIC_BLOCK (i));
+    add_phi_args_after_copy_bb (BASIC_BLOCK_FOR_FN (cfun, i));
 
   for (i = first_new_block; i < (unsigned) last_basic_block; i++)
-    BASIC_BLOCK (i)->flags &= ~BB_DUPLICATED;
+    BASIC_BLOCK_FOR_FN (cfun, i)->flags &= ~BB_DUPLICATED;
 }
 
 
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index f9ac337..dcce38a 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -2487,7 +2487,7 @@ compute_antic (void)
 	{
 	  if (bitmap_bit_p (changed_blocks, postorder[i]))
 	    {
-	      basic_block block = BASIC_BLOCK (postorder[i]);
+	      basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
 	      changed |= compute_antic_aux (block,
 					    bitmap_bit_p (has_abnormal_preds,
 						      block->index));
@@ -2516,7 +2516,7 @@ compute_antic (void)
 	    {
 	      if (bitmap_bit_p (changed_blocks, postorder[i]))
 		{
-		  basic_block block = BASIC_BLOCK (postorder[i]);
+		  basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
 		  changed
 		    |= compute_partial_antic_aux (block,
 						  bitmap_bit_p (has_abnormal_preds,
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 7145559..9108983 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -2028,7 +2028,8 @@ update_range_test (struct range_entry *range, struct range_entry *otherrange,
 {
   operand_entry_t oe = (*ops)[range->idx];
   tree op = oe->op;
-  gimple stmt = op ? SSA_NAME_DEF_STMT (op) : last_stmt (BASIC_BLOCK (oe->id));
+  gimple stmt = op ? SSA_NAME_DEF_STMT (op) :
+    last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id));
   location_t loc = gimple_location (stmt);
   tree optype = op ? TREE_TYPE (op) : boolean_type_node;
   tree tem = build_range_check (loc, optype, exp, in_p, low, high);
@@ -2281,7 +2282,8 @@ optimize_range_tests (enum tree_code opcode,
       oe = (*ops)[i];
       ranges[i].idx = i;
       init_range_entry (ranges + i, oe->op,
-			oe->op ? NULL : last_stmt (BASIC_BLOCK (oe->id)));
+			oe->op ? NULL :
+			  last_stmt (BASIC_BLOCK_FOR_FN (cfun, oe->id)));
       /* For | invert it now, we will invert it again before emitting
 	 the optimized expression.  */
       if (opcode == BIT_IOR_EXPR
diff --git a/gcc/tree-ssa-sink.c b/gcc/tree-ssa-sink.c
index 947a58a..ecc1f6b 100644
--- a/gcc/tree-ssa-sink.c
+++ b/gcc/tree-ssa-sink.c
@@ -182,10 +182,10 @@ nearest_common_dominator_of_uses (gimple stmt, bool *debug_stmts)
 	  bitmap_set_bit (blocks, useblock->index);
 	}
     }
-  commondom = BASIC_BLOCK (bitmap_first_set_bit (blocks));
+  commondom = BASIC_BLOCK_FOR_FN (cfun, bitmap_first_set_bit (blocks));
   EXECUTE_IF_SET_IN_BITMAP (blocks, 0, j, bi)
     commondom = nearest_common_dominator (CDI_DOMINATORS, commondom,
-					  BASIC_BLOCK (j));
+					  BASIC_BLOCK_FOR_FN (cfun, j));
   BITMAP_FREE (blocks);
   return commondom;
 }
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index d722a9b..fbcbf78 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -454,7 +454,7 @@ same_succ_hash (const_same_succ e)
   int flags;
   unsigned int i;
   unsigned int first = bitmap_first_set_bit (e->bbs);
-  basic_block bb = BASIC_BLOCK (first);
+  basic_block bb = BASIC_BLOCK_FOR_FN (cfun, first);
   int size = 0;
   gimple_stmt_iterator gsi;
   gimple stmt;
@@ -502,8 +502,8 @@ same_succ_hash (const_same_succ e)
 
   EXECUTE_IF_SET_IN_BITMAP (e->succs, 0, s, bs)
     {
-      int n = find_edge (bb, BASIC_BLOCK (s))->dest_idx;
-      for (gsi = gsi_start_phis (BASIC_BLOCK (s)); !gsi_end_p (gsi);
+      int n = find_edge (bb, BASIC_BLOCK_FOR_FN (cfun, s))->dest_idx;
+      for (gsi = gsi_start_phis (BASIC_BLOCK_FOR_FN (cfun, s)); !gsi_end_p (gsi);
 	   gsi_next (&gsi))
 	{
 	  gimple phi = gsi_stmt (gsi);
@@ -572,8 +572,8 @@ same_succ_def::equal (const value_type *e1, const compare_type *e2)
   first1 = bitmap_first_set_bit (e1->bbs);
   first2 = bitmap_first_set_bit (e2->bbs);
 
-  bb1 = BASIC_BLOCK (first1);
-  bb2 = BASIC_BLOCK (first2);
+  bb1 = BASIC_BLOCK_FOR_FN (cfun, first1);
+  bb2 = BASIC_BLOCK_FOR_FN (cfun, first2);
 
   if (BB_SIZE (bb1) != BB_SIZE (bb2))
     return 0;
@@ -834,7 +834,7 @@ same_succ_flush_bbs (bitmap bbs)
   bitmap_iterator bi;
 
   EXECUTE_IF_SET_IN_BITMAP (bbs, 0, i, bi)
-    same_succ_flush_bb (BASIC_BLOCK (i));
+    same_succ_flush_bb (BASIC_BLOCK_FOR_FN (cfun, i));
 }
 
 /* Release the last vdef in BB, either normal or phi result.  */
@@ -887,7 +887,7 @@ update_worklist (void)
   same = same_succ_alloc ();
   EXECUTE_IF_SET_IN_BITMAP (deleted_bb_preds, 0, i, bi)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       gcc_assert (bb != NULL);
       find_same_succ_bb (bb, &same);
       if (same == NULL)
@@ -1075,7 +1075,7 @@ set_cluster (basic_block bb1, basic_block bb2)
       merge = BB_CLUSTER (bb1);
       merge_clusters (merge, old);
       EXECUTE_IF_SET_IN_BITMAP (old->bbs, 0, i, bi)
-	BB_CLUSTER (BASIC_BLOCK (i)) = merge;
+	BB_CLUSTER (BASIC_BLOCK_FOR_FN (cfun, i)) = merge;
       all_clusters[old->index] = NULL;
       update_rep_bb (merge, old->rep_bb);
       delete_cluster (old);
@@ -1320,7 +1320,7 @@ same_phi_alternatives (same_succ same_succ, basic_block bb1, basic_block bb2)
 
   EXECUTE_IF_SET_IN_BITMAP (same_succ->succs, 0, s, bs)
     {
-      succ = BASIC_BLOCK (s);
+      succ = BASIC_BLOCK_FOR_FN (cfun, s);
       e1 = find_edge (bb1, succ);
       e2 = find_edge (bb2, succ);
       if (e1->flags & EDGE_COMPLEX
@@ -1406,7 +1406,7 @@ find_clusters_1 (same_succ same_succ)
 
   EXECUTE_IF_SET_IN_BITMAP (same_succ->bbs, 0, i, bi)
     {
-      bb1 = BASIC_BLOCK (i);
+      bb1 = BASIC_BLOCK_FOR_FN (cfun, i);
 
       /* TODO: handle blocks with phi-nodes.  We'll have to find corresponding
 	 phi-nodes in bb1 and bb2, with the same alternatives for the same
@@ -1417,7 +1417,7 @@ find_clusters_1 (same_succ same_succ)
       nr_comparisons = 0;
       EXECUTE_IF_SET_IN_BITMAP (same_succ->bbs, i + 1, j, bj)
 	{
-	  bb2 = BASIC_BLOCK (j);
+	  bb2 = BASIC_BLOCK_FOR_FN (cfun, j);
 
 	  if (bb_has_non_vop_phi (bb2))
 	    continue;
@@ -1573,7 +1573,7 @@ apply_clusters (void)
       bitmap_clear_bit (c->bbs, bb2->index);
       EXECUTE_IF_SET_IN_BITMAP (c->bbs, 0, j, bj)
 	{
-	  bb1 = BASIC_BLOCK (j);
+	  bb1 = BASIC_BLOCK_FOR_FN (cfun, j);
 	  bitmap_clear_bit (update_bbs, bb1->index);
 
 	  replace_block_by (bb1, bb2);
@@ -1633,7 +1633,7 @@ update_debug_stmts (void)
       gimple stmt;
       gimple_stmt_iterator gsi;
 
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
 	{
 	  stmt = gsi_stmt (gsi);
diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
index ad727a1..9289c11 100644
--- a/gcc/tree-ssa-threadupdate.c
+++ b/gcc/tree-ssa-threadupdate.c
@@ -1412,7 +1412,7 @@ mark_threaded_blocks (bitmap threaded_blocks)
     {
       EXECUTE_IF_SET_IN_BITMAP (tmp, 0, i, bi)
 	{
-	  bb = BASIC_BLOCK (i);
+	  bb = BASIC_BLOCK_FOR_FN (cfun, i);
 	  if (EDGE_COUNT (bb->preds) > 1
 	      && !redirection_block_p (bb))
 	    {
@@ -1442,7 +1442,7 @@ mark_threaded_blocks (bitmap threaded_blocks)
      by trimming off the end of the jump thread path.  */
   EXECUTE_IF_SET_IN_BITMAP (tmp, 0, i, bi)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
       FOR_EACH_EDGE (e, ei, bb->preds)
 	{
 	  if (e->aux)
@@ -1512,7 +1512,7 @@ mark_threaded_blocks (bitmap threaded_blocks)
      we have to iterate on those rather than the threaded_edges vector.  */
   EXECUTE_IF_SET_IN_BITMAP (tmp, 0, i, bi)
     {
-      bb = BASIC_BLOCK (i);
+      bb = BASIC_BLOCK_FOR_FN (cfun, i);
       FOR_EACH_EDGE (e, ei, bb->preds)
 	{
 	  if (e->aux)
@@ -1592,7 +1592,7 @@ thread_through_all_blocks (bool may_peel_loop_headers)
      loop structure.  */
   EXECUTE_IF_SET_IN_BITMAP (threaded_blocks, 0, i, bi)
     {
-      basic_block bb = BASIC_BLOCK (i);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
       if (EDGE_COUNT (bb->preds) > 0)
 	retval |= thread_block (bb, true);
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index 44194b8..92652de 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -214,7 +214,8 @@ associate_equivalences_with_edges (void)
 		      equivalency = XNEW (struct edge_equivalency);
 		      equivalency->rhs = x;
 		      equivalency->lhs = cond;
-		      find_edge (bb, BASIC_BLOCK (i))->aux = equivalency;
+		      find_edge (bb, BASIC_BLOCK_FOR_FN (cfun, i))->aux =
+			equivalency;
 		    }
 		}
 	      free (info);
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index d9da996..785e72f 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -5975,7 +5975,7 @@ find_assert_locations (void)
   need_asserts = false;
   for (i = rpo_cnt - 1; i >= 0; --i)
     {
-      basic_block bb = BASIC_BLOCK (rpo[i]);
+      basic_block bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
       edge e;
       edge_iterator ei;
 
-- 
1.7.11.7

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h
  2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
                                       ` (12 preceding siblings ...)
  2013-12-06 15:12                     ` [PATCH 06/13] Eliminate BASIC_BLOCK macro David Malcolm
@ 2013-12-06 15:39                     ` Richard Biener
  2013-12-09 22:07                       ` David Malcolm
  13 siblings, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-12-06 15:39 UTC (permalink / raw)
  To: David Malcolm, Richard Biener; +Cc: gcc-patches

David Malcolm <dmalcolm@redhat.com> wrote:
>I have a series of 13 follow-up patches which remove the remaining
>"cfun"-using macros from basic-block.h
>
>Successfully bootstrapped&regtested on x86_64-unknown-linux-gnu.
>
>These were pre-approved in stage1, and are mechanical in nature [1]
>
>I'd like to apply these to trunk now, but given that we're now in
>stage3, do I need to wait until the next stage1?

No, it's ok now.

>The first 4 patches rename various "_for_function|_FOR_FUNCTION"
>macros to "_for_fn|_FOR_FN" for consistency with the earlier
>patches in this thread.
>
>The remaining patches eliminate cfun-using macros in favor of
>the "_for_fn|_FOR_FN" variant, making uses of cfun explicit.
>There are still some macros in function.h that implicitly use
>cfun, but it's less clear what to replace them with.
>
>Note to self: here's a grep invocation for ensuring that no new
>uses sneak into the sources:
>for m in \
>  basic_block_info_for_function BASIC_BLOCK_FOR_FUNCTION \
>  SET_BASIC_BLOCK_FOR_FUNCTION last_basic_block_for_function \
>  label_to_block_map_for_function profile_status_for_function \
>  SET_BASIC_BLOCK BASIC_BLOCK basic_block_info label_to_block_map \
>  profile_status last_basic_block FOR_EACH_BB FOR_EACH_BB_REVERSE \
>  FOR_ALL_BB ; 
>do
>  grep -nH -E -w $m \
>     gcc/*.[ch] gcc/config/*.[ch] gcc/config/*/*.{c,h,md} ; 
>done
>
>(this currently has 11 false-positives)
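As a sanity check, the quoted loop can be exercised on a toy input; the file name and contents below are made up for illustration. Note that `-w` keeps `BASIC_BLOCK` from matching inside `BASIC_BLOCK_FOR_FN`, since `_` counts as a word character:

```shell
# Demo of the word-boundary behaviour the grep loop relies on,
# run over a throwaway sample file rather than the GCC sources.
tmp=$(mktemp -d)
cat > "$tmp/sample.c" <<'EOF'
bb = BASIC_BLOCK_FOR_FN (cfun, i);  /* new form: must not match */
bb = BASIC_BLOCK (i);               /* old form: must match */
EOF
for m in BASIC_BLOCK FOR_EACH_BB; do
  grep -nH -E -w "$m" "$tmp"/*.c
done
rm -rf "$tmp"
```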

After the patches the macros should be removed so that no new uses appear.

Thanks, Richard.

>[1] with one exception, in patch 10 in gcc/ira-emit.c (ira_emit) where
>I introduced a new local to avoid overlong lines.
>
>David Malcolm (13):
>  Rename macros (basic_block_info_for_function,
>    BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION)
>  Rename last_basic_block_for_function to last_basic_block_for_fn.
>  Rename label_to_block_map_for_function to label_to_block_map_for_fn.
>  Rename profile_status_for_function to profile_status_for_fn.
>  Eliminate SET_BASIC_BLOCK macro.
>  Eliminate BASIC_BLOCK macro.
>  Eliminate basic_block_info macro.
>  Eliminate label_to_block_map macro.
>  Eliminate profile_status macro.
>  Eliminate last_basic_block macro.
>  Eliminate FOR_EACH_BB macro.
>  Eliminate FOR_EACH_BB_REVERSE macro.
>  Eliminate FOR_ALL_BB macro.
>
> gcc/alias.c                              |   2 +-
> gcc/asan.c                               |   6 +-
> gcc/auto-inc-dec.c                       |   2 +-
> gcc/basic-block.h                        |  32 +++------
> gcc/bb-reorder.c                         |  29 ++++----
> gcc/bt-load.c                            |  45 ++++++------
> gcc/caller-save.c                        |   8 +--
> gcc/cfg.c                                |  32 ++++-----
> gcc/cfganal.c                            |  35 +++++-----
> gcc/cfgbuild.c                           |  12 ++--
> gcc/cfgcleanup.c                         |   6 +-
> gcc/cfgexpand.c                          |  14 ++--
> gcc/cfghooks.c                           |  16 ++---
> gcc/cfgloop.c                            |  20 +++---
> gcc/cfgloopanal.c                        |   8 +--
> gcc/cfgloopmanip.c                       |   6 +-
> gcc/cfgrtl.c                             |  61 ++++++++--------
> gcc/cgraphbuild.c                        |   8 +--
> gcc/combine-stack-adj.c                  |   2 +-
> gcc/combine.c                            |   8 +--
> gcc/config/arm/arm.c                     |   4 +-
> gcc/config/bfin/bfin.c                   |   4 +-
> gcc/config/c6x/c6x.c                     |   6 +-
> gcc/config/epiphany/resolve-sw-modes.c   |   6 +-
> gcc/config/frv/frv.c                     |   8 +--
> gcc/config/i386/i386.c                   |   2 +-
> gcc/config/ia64/ia64.c                   |   6 +-
> gcc/config/mips/mips.c                   |   8 +--
> gcc/config/picochip/picochip.c           |   2 +-
> gcc/config/rs6000/rs6000.c               |   2 +-
> gcc/config/s390/s390.c                   |   4 +-
> gcc/config/sh/sh.c                       |   2 +-
> gcc/config/spu/spu.c                     |   6 +-
> gcc/config/tilegx/tilegx.c               |   4 +-
> gcc/config/tilepro/tilepro.c             |   4 +-
> gcc/coverage.c                           |   2 +-
> gcc/cprop.c                              |  23 ++++---
> gcc/cse.c                                |   8 +--
> gcc/dce.c                                |  10 +--
> gcc/df-core.c                            |  68 +++++++++---------
> gcc/df-problems.c                        |  54 +++++++--------
> gcc/df-scan.c                            |  42 ++++++-----
> gcc/df.h                                 |   2 +-
> gcc/dominance.c                          |  37 +++++-----
> gcc/domwalk.c                            |   2 +-
> gcc/dse.c                                |  14 ++--
> gcc/except.c                             |   2 +-
> gcc/final.c                              |   6 +-
> gcc/function.c                           |  16 ++---
> gcc/gcse.c                               |  54 ++++++++-------
> gcc/gimple-iterator.c                    |   2 +-
> gcc/gimple-ssa-isolate-paths.c           |   4 +-
> gcc/gimple-streamer-in.c                 |   4 +-
> gcc/gimple.c                             |   8 ++-
> gcc/graph.c                              |   6 +-
> gcc/graphite-scop-detection.c            |   6 +-
> gcc/graphite-sese-to-poly.c              |   6 +-
> gcc/graphite.c                           |   6 +-
> gcc/haifa-sched.c                        |   4 +-
> gcc/hw-doloop.c                          |   6 +-
> gcc/ifcvt.c                              |   2 +-
> gcc/init-regs.c                          |   2 +-
> gcc/internal-fn.c                        |   6 +-
> gcc/ipa-inline-analysis.c                |   4 +-
> gcc/ipa-prop.c                           |   2 +-
> gcc/ipa-pure-const.c                     |   2 +-
> gcc/ipa-split.c                          |  13 ++--
> gcc/ipa-utils.c                          |   8 +--
> gcc/ira-build.c                          |  15 ++--
> gcc/ira-costs.c                          |   2 +-
> gcc/ira-emit.c                           |  24 ++++---
> gcc/ira.c                                |  42 ++++++-----
> gcc/jump.c                               |   2 +-
> gcc/lcm.c                               | 115 ++++++++++++++++++-------------
> gcc/loop-init.c                          |   6 +-
> gcc/loop-invariant.c                     |   2 +-
> gcc/loop-unroll.c                        |  16 +++--
> gcc/lower-subreg.c                       |   8 +--
> gcc/lra-assigns.c                        |   2 +-
> gcc/lra-coalesce.c                       |   4 +-
> gcc/lra-constraints.c                    |   4 +-
> gcc/lra-eliminations.c                   |   2 +-
> gcc/lra-lives.c                          |   4 +-
> gcc/lra-spills.c                         |   6 +-
> gcc/lra.c                                |  10 +--
> gcc/lto-streamer-in.c                    |  28 ++++----
> gcc/lto-streamer-out.c                   |   8 +--
> gcc/mcf.c                                |   4 +-
> gcc/mode-switching.c                     |  27 ++++----
> gcc/modulo-sched.c                       |   2 +-
> gcc/omp-low.c                            |   6 +-
> gcc/optabs.c                             |   2 +-
> gcc/postreload-gcse.c                    |   4 +-
> gcc/postreload.c                         |   4 +-
> gcc/predict.c                            |  54 +++++++--------
> gcc/profile.c                            |  12 ++--
> gcc/recog.c                              |   6 +-
> gcc/ree.c                                |   2 +-
> gcc/reg-stack.c                          |   6 +-
> gcc/regcprop.c                           |   8 +--
> gcc/reginfo.c                            |   2 +-
> gcc/regrename.c                          |  12 ++--
> gcc/regstat.c                            |   8 +--
> gcc/reload1.c                            |  10 +--
> gcc/resource.c                           |  13 ++--
> gcc/sched-ebb.c                          |   4 +-
> gcc/sched-int.h                          |   5 +-
> gcc/sched-rgn.c                         | 103 +++++++++++++++------------
> gcc/sched-vis.c                          |   2 +-
> gcc/sel-sched-dump.c                     |   2 +-
> gcc/sel-sched-ir.c                       |  35 +++++-----
> gcc/sel-sched.c                          |  22 +++---
> gcc/sese.c                               |   6 +-
> gcc/stack-ptr-mod.c                      |   2 +-
> gcc/store-motion.c                       |  38 +++++-----
> gcc/testsuite/g++.dg/plugin/selfassign.c |   2 +-
> gcc/testsuite/gcc.dg/plugin/selfassign.c |   2 +-
> gcc/tracer.c                             |   8 +--
> gcc/trans-mem.c                          |  15 ++--
> gcc/tree-call-cdce.c                     |   2 +-
> gcc/tree-cfg.c                          | 108 +++++++++++++++--------------
> gcc/tree-cfgcleanup.c                    |  16 ++---
> gcc/tree-complex.c                       |   6 +-
> gcc/tree-dfa.c                           |   6 +-
> gcc/tree-eh.c                            |   6 +-
> gcc/tree-emutls.c                        |   2 +-
> gcc/tree-if-conv.c                       |   2 +-
> gcc/tree-inline.c                        |  32 +++++----
> gcc/tree-into-ssa.c                      |  45 ++++++------
> gcc/tree-loop-distribution.c             |   2 +-
> gcc/tree-nrv.c                           |   6 +-
> gcc/tree-object-size.c                   |   2 +-
> gcc/tree-outof-ssa.c                     |   6 +-
> gcc/tree-profile.c                       |   2 +-
> gcc/tree-scalar-evolution.c              |   2 +-
> gcc/tree-sra.c                           |  14 ++--
> gcc/tree-ssa-ccp.c                       |   6 +-
> gcc/tree-ssa-coalesce.c                  |   6 +-
> gcc/tree-ssa-copy.c                      |   2 +-
> gcc/tree-ssa-copyrename.c                |   4 +-
> gcc/tree-ssa-dce.c                       |  13 ++--
> gcc/tree-ssa-dom.c                       |   8 +--
> gcc/tree-ssa-forwprop.c                  |   2 +-
> gcc/tree-ssa-live.c                      |  32 ++++-----
> gcc/tree-ssa-loop-im.c                   |   8 +--
> gcc/tree-ssa-loop-manip.c                |  24 +++----
> gcc/tree-ssa-math-opts.c                 |  10 +--
> gcc/tree-ssa-pre.c                       |  16 ++---
> gcc/tree-ssa-propagate.c                 |   8 +--
> gcc/tree-ssa-reassoc.c                   |   8 ++-
> gcc/tree-ssa-sccvn.c                     |   2 +-
> gcc/tree-ssa-sink.c                      |   4 +-
> gcc/tree-ssa-structalias.c               |   4 +-
> gcc/tree-ssa-tail-merge.c                |  32 ++++-----
> gcc/tree-ssa-ter.c                       |   2 +-
> gcc/tree-ssa-threadupdate.c              |  10 +--
> gcc/tree-ssa-uncprop.c                   |   9 +--
> gcc/tree-ssa-uninit.c                    |   4 +-
> gcc/tree-ssa.c                           |   6 +-
> gcc/tree-stdarg.c                        |   8 +--
> gcc/tree-switch-conversion.c             |   2 +-
> gcc/tree-vect-generic.c                  |   2 +-
> gcc/tree-vectorizer.c                    |   6 +-
> gcc/tree-vrp.c                           |  20 +++---
> gcc/tsan.c                               |   2 +-
> gcc/ubsan.c                              |   2 +-
> gcc/value-prof.c                         |   6 +-
> gcc/var-tracking.c                       |  28 ++++----
> gcc/vtable-verify.c                      |   2 +-
> gcc/web.c                                |   6 +-
> 170 files changed, 1112 insertions(+), 1030 deletions(-)


* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-06 15:09                     ` [PATCH 10/13] Eliminate last_basic_block macro David Malcolm
@ 2013-12-06 15:58                       ` Steven Bosscher
  2013-12-06 18:57                         ` Oleg Endo
  0 siblings, 1 reply; 42+ messages in thread
From: Steven Bosscher @ 2013-12-06 15:58 UTC (permalink / raw)
  To: David Malcolm; +Cc: Richard Biener, gcc-patches

On Fri, Dec 6, 2013 at 3:51 PM, David Malcolm wrote:
>         * asan.c (transform_statements): Eliminate use of last_basic_block
>         in favor of last_basic_block_for_fn, in order to make use of cfun
>         explicit.

Can we please make all this _for_fn go away?

Ciao!
Steven


* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-06 15:58                       ` Steven Bosscher
@ 2013-12-06 18:57                         ` Oleg Endo
  2013-12-06 20:25                           ` Richard Biener
  0 siblings, 1 reply; 42+ messages in thread
From: Oleg Endo @ 2013-12-06 18:57 UTC (permalink / raw)
  To: Steven Bosscher; +Cc: David Malcolm, Richard Biener, gcc-patches

On Fri, 2013-12-06 at 16:57 +0100, Steven Bosscher wrote:
> On Fri, Dec 6, 2013 at 3:51 PM, David Malcolm wrote:
> >         * asan.c (transform_statements): Eliminate use of last_basic_block
> >         in favor of last_basic_block_for_fn, in order to make use of cfun
> >         explicit.
> 
> Can we please make all this _for_fn go away?
> 

Sorry if this has been discussed before... but why not add member
functions to 'function' instead of freestanding macros/functions that
take a function* as a first argument?  This would also make it easier to
eliminate the "_for_fn" suffix (avoiding freestanding function/macro
name clashes etc.), I think.

Cheers,
Oleg


* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-06 18:57                         ` Oleg Endo
@ 2013-12-06 20:25                           ` Richard Biener
  2013-12-09 21:48                             ` David Malcolm
  0 siblings, 1 reply; 42+ messages in thread
From: Richard Biener @ 2013-12-06 20:25 UTC (permalink / raw)
  To: Oleg Endo, Steven Bosscher; +Cc: David Malcolm, gcc-patches

Oleg Endo <oleg.endo@t-online.de> wrote:
>On Fri, 2013-12-06 at 16:57 +0100, Steven Bosscher wrote:
>> On Fri, Dec 6, 2013 at 3:51 PM, David Malcolm wrote:
>> >         * asan.c (transform_statements): Eliminate use of last_basic_block
>> >         in favor of last_basic_block_for_fn, in order to make use of cfun
>> >         explicit.
>> 
>> Can we please make all this _for_fn go away?
>> 
>
>Sorry if this has been discussed before... but why not add member
>functions to 'function' instead of freestanding macros/functions that
>take a function* as a first argument?  This would also make it easier to
>eliminate the "_for_fn" suffix (avoiding freestanding function/macro
>name clashes etc.), I think.

Both can be done, but these patches make cfun uses explicit, which was the goal, while following existing practice.

Richard.

>Cheers,
>Oleg



* Re: [PATCH 11/13] Eliminate FOR_EACH_BB macro.
  2013-12-06 15:08                     ` [PATCH 11/13] Eliminate FOR_EACH_BB macro David Malcolm
@ 2013-12-07  7:13                       ` Oleg Endo
  0 siblings, 0 replies; 42+ messages in thread
From: Oleg Endo @ 2013-12-07  7:13 UTC (permalink / raw)
  To: David Malcolm; +Cc: Richard Biener, gcc-patches

David,

Could you please also update the use of FOR_EACH_BB in
config/sh/sh_treg_combine.cc ?

Thanks,
Oleg

On Fri, 2013-12-06 at 09:51 -0500, David Malcolm wrote:
> gcc/
> 	* basic-block.h (FOR_EACH_BB): Eliminate macro.
> 
> 	* asan.c (transform_statements, execute_sanopt): Eliminate
> 	use of FOR_EACH_BB in favor of FOR_EACH_BB_FN, to make use of cfun
> 	explicit.
> 	* auto-inc-dec.c (rest_of_handle_auto_inc_dec): Likewise.
> 	* bb-reorder.c (find_rarely_executed_basic_blocks_and_crossing_edges,
> 	set_edge_can_fallthru_flag, fix_up_fall_thru_edges,
> 	fix_crossing_unconditional_branches, add_reg_crossing_jump_notes,
> 	insert_section_boundary_note, rest_of_handle_reorder_blocks,
> 	duplicate_computed_gotos): Likewise.
> 	* cfg.c (clear_edges, compact_blocks, brief_dump_cfg): Likewise.
> 	* cfganal.c (find_unreachable_blocks, add_noreturn_fake_exit_edges,
> 	compute_dominance_frontiers_1, single_pred_before_succ_order): Likewise.
> 	* cfgbuild.c (find_many_sub_basic_blocks): Likewise.
> 	* cfgcleanup.c (try_optimize_cfg, delete_dead_jumptables): Likewise.
> 	* cfgexpand.c (add_scope_conflicts, discover_nonconstant_array_refs):
> 	Likewise.
> 	* cfgloop.c (flow_loops_cfg_dump, get_loop_body, record_loop_exits,
> 	verify_loop_structure): Likewise.
> 	* cfgloopanal.c (mark_loop_exit_edges): Likewise.
> 	* cfgrtl.c (compute_bb_for_insn, find_partition_fixes,
> 	verify_hot_cold_block_grouping, purge_all_dead_edges,
> 	fixup_abnormal_edges, record_effective_endpoints,
> 	outof_cfg_layout_mode, fixup_reorder_chain, force_one_exit_fallthru,
> 	break_superblocks): Likewise.
> 	* cgraphbuild.c (build_cgraph_edges, rebuild_cgraph_edges,
> 	cgraph_rebuild_references): Likewise.
> 	* combine-stack-adj.c (combine_stack_adjustments): Likewise.
> 	* combine.c (delete_noop_moves, create_log_links,
> 	combine_instructions): Likewise.
> 	* config/arm/arm.c (thumb1_reorg, thumb2_reorg): Likewise.
> 	* config/bfin/bfin.c (bfin_gen_bundles, reorder_var_tracking_notes):
> 	Likewise.
> 	* config/c6x/c6x.c (c6x_gen_bundles, conditionalize_after_sched,
> 	c6x_reorg): Likewise.
> 	* config/epiphany/resolve-sw-modes.c (resolve_sw_modes): Likewise.
> 	* config/frv/frv.c (frv_optimize_membar): Likewise.
> 	* config/i386/i386.c (ix86_finalize_stack_realign_flags): Likewise.
> 	* config/ia64/ia64.c (ia64_reorg): Likewise.
> 	* config/mips/mips.c (mips_annotate_pic_calls): Likewise.
> 	* config/picochip/picochip.c (reorder_var_tracking_notes): Likewise.
> 	* config/rs6000/rs6000.c (rs6000_alloc_sdmode_stack_slot): Likewise.
> 	* config/s390/s390.c (s390_regs_ever_clobbered): Likewise.
> 	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.
> 	* config/tilegx/tilegx.c (tilegx_gen_bundles,
> 	reorder_var_tracking_notes): Likewise.
> 	* config/tilepro/tilepro.c (tilepro_gen_bundles,
> 	reorder_var_tracking_notes): Likewise.
> 	* coverage.c (coverage_compute_cfg_checksum): Likewise.
> 	* cprop.c (compute_hash_table_work, compute_cprop_data,
> 	local_cprop_pass, find_implicit_sets): Likewise.
> 	* cse.c (cse_condition_code_reg): Likewise.
> 	* dce.c (prescan_insns_for_dce): Likewise.
> 	* df-core.c (df_compact_blocks): Likewise.
> 	* df-problems.c (df_word_lr_alloc): Likewise.
> 	* df-scan.c (df_scan_start_dump, df_scan_blocks, df_insn_rescan_all,
> 	df_update_entry_exit_and_calls): Likewise.
> 	* dominance.c (calculate_dominance_info, verify_dominators,
> 	debug_dominance_info): Likewise.
> 	* dse.c (dse_step5_nospill): Likewise.
> 	* except.c (finish_eh_generation): Likewise.
> 	* final.c (compute_alignments): Likewise.
> 	* function.c (thread_prologue_and_epilogue_insns,
> 	rest_of_match_asm_constraints): Likewise.
> 	* gcse.c (compute_hash_table_work, prune_expressions,
> 	compute_pre_data, compute_code_hoist_vbeinout, hoist_code,
> 	calculate_bb_reg_pressure, compute_ld_motion_mems): Likewise.
> 	* gimple-iterator.c (gsi_commit_edge_inserts): Likewise.
> 	* gimple-ssa-isolate-paths.c (find_implicit_erroneous_behaviour,
> 	find_explicit_erroneous_behaviour): Likewise.
> 	* graphite-sese-to-poly.c (rewrite_reductions_out_of_ssa,
> 	rewrite_cross_bb_scalar_deps_out_of_ssa): Likewise.
> 	* haifa-sched.c (haifa_sched_init): Likewise.
> 	* hw-doloop.c (discover_loops, set_bb_indices, reorder_loops):
> 	Likewise.
> 	* ifcvt.c (if_convert): Likewise.
> 	* init-regs.c (initialize_uninitialized_regs): Likewise.
> 	* ipa-prop.c (ipcp_transform_function): Likewise.
> 	* ipa-pure-const.c (analyze_function): Likewise.
> 	* ipa-split.c (find_split_points, execute_split_functions): Likewise.
> 	* ira-build.c (form_loop_tree): Likewise.
> 	* ira-costs.c (find_costs_and_classes): Likewise.
> 	* ira-emit.c (emit_moves, add_ranges_and_copies, ira_emit): Likewise.
> 	* ira.c (decrease_live_ranges_number, compute_regs_asm_clobbered,
> 	mark_elimination, update_equiv_regs, find_moveable_pseudos,
> 	split_live_ranges_for_shrink_wrap, allocate_initial_values): Likewise.
> 	* jump.c (mark_all_labels): Likewise.
> 	* lcm.c (compute_laterin, compute_insert_delete, compute_available,
> 	compute_nearerout, compute_rev_insert_delete): Likewise.
> 	* loop-init.c (fix_loop_structure): Likewise.
> 	* loop-invariant.c (calculate_loop_reg_pressure): Likewise.
> 	* lower-subreg.c (decompose_multiword_subregs,
> 	decompose_multiword_subregs): Likewise.
> 	* lra-assigns.c (assign_by_spills): Likewise.
> 	* lra-coalesce.c (lra_coalesce): Likewise.
> 	* lra-constraints.c (lra_inheritance, remove_inheritance_pseudos):
> 	Likewise.
> 	* lra-eliminations.c (lra_init_elimination): Likewise.
> 	* lra-spills.c (assign_spill_hard_regs, spill_pseudos,
> 	lra_final_code_change): Likewise.
> 	* lra.c (remove_scratches, check_rtl, has_nonexceptional_receiver,
> 	update_inc_notes): Likewise.
> 	* mcf.c (adjust_cfg_counts): Likewise.
> 	* mode-switching.c (optimize_mode_switching): Likewise.
> 	* modulo-sched.c (rest_of_handle_sms): Likewise.
> 	* omp-low.c (optimize_omp_library_calls, expand_omp_taskreg,
> 	expand_omp_target): Likewise.
> 	* postreload-gcse.c (alloc_mem, compute_hash_table): Likewise.
> 	* postreload.c (reload_cse_regs_1): Likewise.
> 	* predict.c (strip_predict_hints, tree_bb_level_predictions,
> 	tree_estimate_probability, expensive_function_p,
> 	estimate_bb_frequencies, compute_function_frequency): Likewise.
> 	* profile.c (is_inconsistent, compute_branch_probabilities,
> 	branch_prob): Likewise.
> 	* ree.c (find_removable_extensions): Likewise.
> 	* reg-stack.c (compensate_edges, convert_regs, reg_to_stack): Likewise.
> 	* regcprop.c (copyprop_hardreg_forward): Likewise.
> 	* reginfo.c (init_subregs_of_mode): Likewise.
> 	* regrename.c (regrename_analyze): Likewise.
> 	* regstat.c (regstat_compute_ri, regstat_compute_calls_crossed):
> 	Likewise.
> 	* reload1.c (has_nonexceptional_receiver, reload,
> 	calculate_elim_costs_all_insns): Likewise.
> 	* resource.c (init_resource_info, free_resource_info): Likewise.
> 	* sched-ebb.c (schedule_ebbs): Likewise.
> 	* sched-rgn.c (is_cfg_nonregular, find_single_block_region,
> 	haifa_find_rgns, sched_rgn_local_init): Likewise.
> 	* sel-sched-dump.c (sel_dump_cfg_2): Likewise.
> 	* sel-sched-ir.c (init_lv_sets, free_lv_sets,
> 	make_regions_from_the_rest): Likewise.
> 	* sese.c (build_sese_loop_nests, sese_build_liveouts): Likewise.
> 	* stack-ptr-mod.c (notice_stack_pointer_modification): Likewise.
> 	* store-motion.c (compute_store_table, build_store_vectors,
> 	one_store_motion_pass): Likewise.
> 	* tracer.c (tail_duplicate): Likewise.
> 	* trans-mem.c (compute_transaction_bits): Likewise.
> 	* tree-call-cdce.c (tree_call_cdce): Likewise.
> 	* tree-cfg.c (replace_loop_annotate, factor_computed_gotos,
> 	fold_cond_expr_cond, make_edges, assign_discriminators,
> 	make_abnormal_goto_edges, cleanup_dead_labels, group_case_labels,
> 	dump_cfg_stats, gimple_verify_flow_info, print_loop,
> 	execute_fixup_cfg): Likewise.
> 	* tree-cfgcleanup.c (cleanup_tree_cfg_1, merge_phi_nodes): Likewise.
> 	* tree-complex.c (init_dont_simulate_again, tree_lower_complex):
> 	Likewise.
> 	* tree-dfa.c (collect_dfa_stats, dump_enumerated_decls): Likewise.
> 	* tree-eh.c (execute_lower_resx, execute_lower_eh_dispatch,
> 	mark_reachable_handlers): Likewise.
> 	* tree-emutls.c (lower_emutls_function_body): Likewise.
> 	* tree-if-conv.c (main_tree_if_conversion): Likewise.
> 	* tree-inline.c (optimize_inline_calls): Likewise.
> 	* tree-into-ssa.c (rewrite_into_ssa, update_ssa): Likewise.
> 	* tree-nrv.c (tree_nrv, execute_return_slot_opt): Likewise.
> 	* tree-object-size.c (compute_object_sizes): Likewise.
> 	* tree-outof-ssa.c (eliminate_useless_phis, rewrite_trees,
> 	insert_backedge_copies, tree_profiling): Likewise.
> 	* tree-scalar-evolution.c (scev_const_prop): Likewise.
> 	* tree-sra.c (scan_function, sra_modify_function_body,
> 	propagate_dereference_distances, ipa_sra_modify_function_body,
> 	convert_callers): Likewise.
> 	* tree-ssa-ccp.c (ccp_initialize, execute_fold_all_builtins): Likewise.
> 	* tree-ssa-coalesce.c (build_ssa_conflict_graph,
> 	create_outofssa_var_map, coalesce_partitions): Likewise.
> 	* tree-ssa-copy.c (init_copy_prop): Likewise.
> 	* tree-ssa-copyrename.c (rename_ssa_copies): Likewise.
> 	* tree-ssa-dce.c (find_obviously_necessary_stmts,
> 	eliminate_unnecessary_stmts): Likewise.
> 	* tree-ssa-dom.c (free_all_edge_infos, tree_ssa_dominator_optimize):
> 	Likewise.
> 	* tree-ssa-forwprop.c (ssa_forward_propagate_and_combine): Likewise.
> 	* tree-ssa-live.c (clear_unused_block_pointer, remove_unused_locals,
> 	new_tree_live_info, calculate_live_on_exit, dump_live_info,
> 	analyze_memory_references, fill_always_executed_in,
> 	tree_ssa_lim_finalize): Likewise.
> 	* tree-ssa-loop-manip.c (find_uses_to_rename, verify_loop_closed_ssa):
> 	Likewise.
> 	* tree-ssa-math-opts.c (execute_cse_reciprocals, execute_cse_sincos,
> 	execute_optimize_bswap, execute_optimize_widening_mul): Likewise.
> 	* tree-ssa-propagate.c (substitute_and_fold): Likewise.
> 	* tree-ssa-structalias.c (compute_points_to_sets): Likewise.
> 	* tree-ssa-tail-merge.c (find_same_succ, reset_cluster_vectors):
> 	Likewise.
> 	* tree-ssa-ter.c (find_replaceable_exprs): Likewise.
> 	* tree-ssa-threadupdate.c (thread_through_all_blocks): Likewise.
> 	* tree-ssa-uncprop.c (associate_equivalences_with_edges,
> 	tree_ssa_uncprop): Likewise.
> 	* tree-ssa-uninit.c (warn_uninitialized_vars,
> 	execute_late_warn_uninitialized): Likewise.
> 	* tree-ssa.c (verify_ssa, execute_update_addresses_taken): Likewise.
> 	* tree-stdarg.c (check_all_va_list_escapes, execute_optimize_stdarg):
> 	Likewise.
> 	* tree-switch-conversion.c (do_switchconv): Likewise.
> 	* tree-vect-generic.c (expand_vector_operations): Likewise.
> 	* tree-vectorizer.c (adjust_simduid_builtins, note_simd_array_uses,
> 	execute_vect_slp): Likewise.
> 	* tree-vrp.c (check_all_array_refs, remove_range_assertions,
> 	vrp_initialize, identify_jump_threads, instrument_memory_accesses):
> 	Likewise.
> 	* ubsan.c (ubsan_pass): Likewise.
> 	* value-prof.c (verify_histograms, gimple_value_profile_transformations,
> 	gimple_find_values_to_profile): Likewise.
> 	* var-tracking.c (vt_find_locations, dump_dataflow_sets, vt_emit_notes,
> 	vt_initialize, delete_debug_insns, vt_finalize): Likewise.
> 
> gcc/testsuite/
> 	* g++.dg/plugin/selfassign.c (execute_warn_self_assign): Eliminate
> 	use of FOR_EACH_BB in favor of FOR_EACH_BB_FN, to make use of cfun
> 	explicit.
> 	* gcc.dg/plugin/selfassign.c (execute_warn_self_assign): Likewise.
> ---
>  gcc/asan.c                               |  4 ++--
>  gcc/auto-inc-dec.c                       |  2 +-
>  gcc/basic-block.h                        |  2 --
>  gcc/bb-reorder.c                         | 22 +++++++++++-----------
>  gcc/cfg.c                                |  6 +++---
>  gcc/cfganal.c                            |  8 ++++----
>  gcc/cfgbuild.c                           |  8 ++++----
>  gcc/cfgcleanup.c                         |  4 ++--
>  gcc/cfgexpand.c                          |  4 ++--
>  gcc/cfgloop.c                            | 14 +++++++-------
>  gcc/cfgloopanal.c                        |  2 +-
>  gcc/cfgrtl.c                             | 22 +++++++++++-----------
>  gcc/cgraphbuild.c                        |  6 +++---
>  gcc/combine-stack-adj.c                  |  2 +-
>  gcc/combine.c                            |  8 ++++----
>  gcc/config/arm/arm.c                     |  4 ++--
>  gcc/config/bfin/bfin.c                   |  4 ++--
>  gcc/config/c6x/c6x.c                     |  6 +++---
>  gcc/config/epiphany/resolve-sw-modes.c   |  2 +-
>  gcc/config/frv/frv.c                     |  4 ++--
>  gcc/config/i386/i386.c                   |  2 +-
>  gcc/config/ia64/ia64.c                   |  2 +-
>  gcc/config/mips/mips.c                   |  2 +-
>  gcc/config/picochip/picochip.c           |  2 +-
>  gcc/config/rs6000/rs6000.c               |  2 +-
>  gcc/config/s390/s390.c                   |  2 +-
>  gcc/config/spu/spu.c                     |  2 +-
>  gcc/config/tilegx/tilegx.c               |  4 ++--
>  gcc/config/tilepro/tilepro.c             |  4 ++--
>  gcc/coverage.c                           |  2 +-
>  gcc/cprop.c                              |  8 ++++----
>  gcc/cse.c                                |  2 +-
>  gcc/dce.c                                |  2 +-
>  gcc/df-core.c                            |  8 ++++----
>  gcc/df-problems.c                        |  2 +-
>  gcc/df-scan.c                            |  8 ++++----
>  gcc/dominance.c                          |  6 +++---
>  gcc/dse.c                                |  2 +-
>  gcc/except.c                             |  2 +-
>  gcc/final.c                              |  4 ++--
>  gcc/function.c                           | 12 ++++++------
>  gcc/gcse.c                               | 16 ++++++++--------
>  gcc/gimple-iterator.c                    |  2 +-
>  gcc/gimple-ssa-isolate-paths.c           |  4 ++--
>  gcc/graphite-sese-to-poly.c              |  4 ++--
>  gcc/haifa-sched.c                        |  2 +-
>  gcc/hw-doloop.c                          |  6 +++---
>  gcc/ifcvt.c                              |  2 +-
>  gcc/init-regs.c                          |  2 +-
>  gcc/ipa-prop.c                           |  2 +-
>  gcc/ipa-pure-const.c                     |  2 +-
>  gcc/ipa-split.c                          |  4 ++--
>  gcc/ira-build.c                          |  2 +-
>  gcc/ira-costs.c                          |  2 +-
>  gcc/ira-emit.c                           | 14 +++++++-------
>  gcc/ira.c                                | 22 +++++++++++-----------
>  gcc/jump.c                               |  2 +-
>  gcc/lcm.c                                | 10 +++++-----
>  gcc/loop-init.c                          |  4 ++--
>  gcc/loop-invariant.c                     |  2 +-
>  gcc/lower-subreg.c                       |  4 ++--
>  gcc/lra-assigns.c                        |  2 +-
>  gcc/lra-coalesce.c                       |  4 ++--
>  gcc/lra-constraints.c                    |  4 ++--
>  gcc/lra-eliminations.c                   |  2 +-
>  gcc/lra-spills.c                         |  6 +++---
>  gcc/lra.c                                |  8 ++++----
>  gcc/mcf.c                                |  2 +-
>  gcc/mode-switching.c                     |  6 +++---
>  gcc/modulo-sched.c                       |  2 +-
>  gcc/omp-low.c                            |  6 +++---
>  gcc/postreload-gcse.c                    |  4 ++--
>  gcc/postreload.c                         |  2 +-
>  gcc/predict.c                            | 14 +++++++-------
>  gcc/profile.c                            |  8 ++++----
>  gcc/ree.c                                |  2 +-
>  gcc/reg-stack.c                          |  6 +++---
>  gcc/regcprop.c                           |  4 ++--
>  gcc/reginfo.c                            |  2 +-
>  gcc/regrename.c                          |  8 ++++----
>  gcc/regstat.c                            |  4 ++--
>  gcc/reload1.c                            |  8 ++++----
>  gcc/resource.c                           |  4 ++--
>  gcc/sched-ebb.c                          |  2 +-
>  gcc/sched-rgn.c                          | 26 +++++++++++++-------------
>  gcc/sel-sched-dump.c                     |  2 +-
>  gcc/sel-sched-ir.c                       | 10 +++++-----
>  gcc/sese.c                               |  6 +++---
>  gcc/stack-ptr-mod.c                      |  2 +-
>  gcc/store-motion.c                       |  6 +++---
>  gcc/testsuite/g++.dg/plugin/selfassign.c |  2 +-
>  gcc/testsuite/gcc.dg/plugin/selfassign.c |  2 +-
>  gcc/tracer.c                             |  2 +-
>  gcc/trans-mem.c                          |  2 +-
>  gcc/tree-call-cdce.c                     |  2 +-
>  gcc/tree-cfg.c                           | 28 ++++++++++++++--------------
>  gcc/tree-cfgcleanup.c                    |  4 ++--
>  gcc/tree-complex.c                       |  4 ++--
>  gcc/tree-dfa.c                           |  4 ++--
>  gcc/tree-eh.c                            |  6 +++---
>  gcc/tree-emutls.c                        |  2 +-
>  gcc/tree-if-conv.c                       |  2 +-
>  gcc/tree-inline.c                        |  2 +-
>  gcc/tree-into-ssa.c                      |  8 ++++----
>  gcc/tree-nrv.c                           |  6 +++---
>  gcc/tree-object-size.c                   |  2 +-
>  gcc/tree-outof-ssa.c                     |  6 +++---
>  gcc/tree-profile.c                       |  2 +-
>  gcc/tree-scalar-evolution.c              |  2 +-
>  gcc/tree-sra.c                           | 10 +++++-----
>  gcc/tree-ssa-ccp.c                       |  6 +++---
>  gcc/tree-ssa-coalesce.c                  |  6 +++---
>  gcc/tree-ssa-copy.c                      |  2 +-
>  gcc/tree-ssa-copyrename.c                |  4 ++--
>  gcc/tree-ssa-dce.c                       |  6 +++---
>  gcc/tree-ssa-dom.c                       |  4 ++--
>  gcc/tree-ssa-forwprop.c                  |  2 +-
>  gcc/tree-ssa-live.c                      | 18 +++++++++---------
>  gcc/tree-ssa-loop-im.c                   |  6 +++---
>  gcc/tree-ssa-loop-manip.c                |  4 ++--
>  gcc/tree-ssa-math-opts.c                 | 10 +++++-----
>  gcc/tree-ssa-propagate.c                 |  2 +-
>  gcc/tree-ssa-structalias.c               |  4 ++--
>  gcc/tree-ssa-tail-merge.c                |  4 ++--
>  gcc/tree-ssa-ter.c                       |  2 +-
>  gcc/tree-ssa-threadupdate.c              |  2 +-
>  gcc/tree-ssa-uncprop.c                   |  4 ++--
>  gcc/tree-ssa-uninit.c                    |  4 ++--
>  gcc/tree-ssa.c                           |  6 +++---
>  gcc/tree-stdarg.c                        |  6 +++---
>  gcc/tree-switch-conversion.c             |  2 +-
>  gcc/tree-vect-generic.c                  |  2 +-
>  gcc/tree-vectorizer.c                    |  6 +++---
>  gcc/tree-vrp.c                           |  8 ++++----
>  gcc/tsan.c                               |  2 +-
>  gcc/ubsan.c                              |  2 +-
>  gcc/value-prof.c                         |  6 +++---
>  gcc/var-tracking.c                       | 16 ++++++++--------
>  138 files changed, 363 insertions(+), 365 deletions(-)
> 
> diff --git a/gcc/asan.c b/gcc/asan.c
> index 09c0667..a50186c 100644
> --- a/gcc/asan.c
> +++ b/gcc/asan.c
> @@ -2043,7 +2043,7 @@ transform_statements (void)
>    gimple_stmt_iterator i;
>    int saved_last_basic_block = last_basic_block_for_fn (cfun);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        basic_block prev_bb = bb;
>  
> @@ -2557,7 +2557,7 @@ execute_sanopt (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> diff --git a/gcc/auto-inc-dec.c b/gcc/auto-inc-dec.c
> index 6006b70..be7fdf8 100644
> --- a/gcc/auto-inc-dec.c
> +++ b/gcc/auto-inc-dec.c
> @@ -1480,7 +1480,7 @@ rest_of_handle_auto_inc_dec (void)
>    reg_next_use = XCNEWVEC (rtx, max_reg);
>    reg_next_inc_use = XCNEWVEC (rtx, max_reg);
>    reg_next_def = XCNEWVEC (rtx, max_reg);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      merge_in_block (max_reg, bb);
>  
>    free (reg_next_use);
> diff --git a/gcc/basic-block.h b/gcc/basic-block.h
> index 174b650..b378a5b 100644
> --- a/gcc/basic-block.h
> +++ b/gcc/basic-block.h
> @@ -333,8 +333,6 @@ struct GTY(()) control_flow_graph {
>  #define FOR_EACH_BB_FN(BB, FN) \
>    FOR_BB_BETWEEN (BB, (FN)->cfg->x_entry_block_ptr->next_bb, (FN)->cfg->x_exit_block_ptr, next_bb)
>  
> -#define FOR_EACH_BB(BB) FOR_EACH_BB_FN (BB, cfun)
> -
>  #define FOR_EACH_BB_REVERSE_FN(BB, FN) \
>    FOR_BB_BETWEEN (BB, (FN)->cfg->x_exit_block_ptr->prev_bb, (FN)->cfg->x_entry_block_ptr, prev_bb)
>  
> diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
> index 363af2d..7f8ea07 100644
> --- a/gcc/bb-reorder.c
> +++ b/gcc/bb-reorder.c
> @@ -1566,7 +1566,7 @@ find_rarely_executed_basic_blocks_and_crossing_edges (void)
>    vec<basic_block> bbs_in_hot_partition = vNULL;
>  
>    /* Mark which partition (hot/cold) each basic block belongs in.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bool cold_bb = false;
>  
> @@ -1658,7 +1658,7 @@ find_rarely_executed_basic_blocks_and_crossing_edges (void)
>  
>    /* Mark every edge that crosses between sections.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_EACH_EDGE (e, ei, bb->succs)
>        {
>  	unsigned int flags = e->flags;
> @@ -1691,7 +1691,7 @@ set_edge_can_fallthru_flag (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> @@ -1792,7 +1792,7 @@ fix_up_fall_thru_edges (void)
>    rtx old_jump;
>    rtx fall_thru_label;
>  
> -  FOR_EACH_BB (cur_bb)
> +  FOR_EACH_BB_FN (cur_bb, cfun)
>      {
>        fall_thru = NULL;
>        if (EDGE_COUNT (cur_bb->succs) > 0)
> @@ -1992,7 +1992,7 @@ fix_crossing_conditional_branches (void)
>    rtx old_label = NULL_RTX;
>    rtx new_label;
>  
> -  FOR_EACH_BB (cur_bb)
> +  FOR_EACH_BB_FN (cur_bb, cfun)
>      {
>        crossing_edge = NULL;
>        if (EDGE_COUNT (cur_bb->succs) > 0)
> @@ -2123,7 +2123,7 @@ fix_crossing_unconditional_branches (void)
>    rtx cur_insn;
>    edge succ;
>  
> -  FOR_EACH_BB (cur_bb)
> +  FOR_EACH_BB_FN (cur_bb, cfun)
>      {
>        last_insn = BB_END (cur_bb);
>  
> @@ -2201,7 +2201,7 @@ add_reg_crossing_jump_notes (void)
>    edge e;
>    edge_iterator ei;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_EACH_EDGE (e, ei, bb->succs)
>        if ((e->flags & EDGE_CROSSING)
>  	  && JUMP_P (BB_END (e->src))
> @@ -2286,7 +2286,7 @@ insert_section_boundary_note (void)
>    if (!crtl->has_bb_partition)
>      return;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (!current_partition)
>  	current_partition = BB_PARTITION (bb);
> @@ -2321,7 +2321,7 @@ rest_of_handle_reorder_blocks (void)
>    reorder_basic_blocks ();
>    cleanup_cfg (CLEANUP_EXPENSIVE);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
>        bb->aux = bb->next_bb;
>    cfg_layout_finalize ();
> @@ -2410,7 +2410,7 @@ duplicate_computed_gotos (void)
>    /* Look for blocks that end in a computed jump, and see if such blocks
>       are suitable for unfactoring.  If a block is a candidate for unfactoring,
>       mark it in the candidates.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        edge e;
> @@ -2457,7 +2457,7 @@ duplicate_computed_gotos (void)
>      goto done;
>  
>    /* Duplicate computed gotos.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (bb->flags & BB_VISITED)
>  	continue;
> diff --git a/gcc/cfg.c b/gcc/cfg.c
> index 6c3181d..4f9d769 100644
> --- a/gcc/cfg.c
> +++ b/gcc/cfg.c
> @@ -101,7 +101,7 @@ clear_edges (void)
>    edge e;
>    edge_iterator ei;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_EACH_EDGE (e, ei, bb->succs)
>  	free_edge (e);
> @@ -163,7 +163,7 @@ compact_blocks (void)
>        basic_block bb;
>  
>        i = NUM_FIXED_BLOCKS;
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
>  	  bb->index = i;
> @@ -828,7 +828,7 @@ brief_dump_cfg (FILE *file, int flags)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        dump_bb_info (file, bb, 0,
>  		    flags & (TDF_COMMENT | TDF_DETAILS),
> diff --git a/gcc/cfganal.c b/gcc/cfganal.c
> index 9900d82..3371b4a 100644
> --- a/gcc/cfganal.c
> +++ b/gcc/cfganal.c
> @@ -159,7 +159,7 @@ find_unreachable_blocks (void)
>  
>    /* Clear all the reachability flags.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->flags &= ~BB_REACHABLE;
>  
>    /* Add our starting points to the worklist.  Almost always there will
> @@ -554,7 +554,7 @@ add_noreturn_fake_exit_edges (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (EDGE_COUNT (bb->succs) == 0)
>        make_single_succ_edge (bb, EXIT_BLOCK_PTR_FOR_FN (cfun), EDGE_FAKE);
>  }
> @@ -1236,7 +1236,7 @@ compute_dominance_frontiers_1 (bitmap_head *frontiers)
>    edge p;
>    edge_iterator ei;
>    basic_block b;
> -  FOR_EACH_BB (b)
> +  FOR_EACH_BB_FN (b, cfun)
>      {
>        if (EDGE_COUNT (b->preds) >= 2)
>  	{
> @@ -1517,7 +1517,7 @@ single_pred_before_succ_order (void)
>    bitmap_clear (visited);
>  
>    MARK_VISITED (ENTRY_BLOCK_PTR_FOR_FN (cfun));
> -  FOR_EACH_BB (x)
> +  FOR_EACH_BB_FN (x, cfun)
>      {
>        if (VISITED_P (x))
>  	continue;
> diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
> index f73bbc5..acfc73b 100644
> --- a/gcc/cfgbuild.c
> +++ b/gcc/cfgbuild.c
> @@ -595,15 +595,15 @@ find_many_sub_basic_blocks (sbitmap blocks)
>  {
>    basic_block bb, min, max;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      SET_STATE (bb,
>  	       bitmap_bit_p (blocks, bb->index) ? BLOCK_TO_SPLIT : BLOCK_ORIGINAL);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (STATE (bb) == BLOCK_TO_SPLIT)
>        find_bb_boundaries (bb);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (STATE (bb) != BLOCK_ORIGINAL)
>        break;
>  
> @@ -640,6 +640,6 @@ find_many_sub_basic_blocks (sbitmap blocks)
>  	compute_outgoing_frequencies (bb);
>        }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      SET_STATE (bb, 0);
>  }
> diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
> index 234e5b6..cf72c03 100644
> --- a/gcc/cfgcleanup.c
> +++ b/gcc/cfgcleanup.c
> @@ -2613,7 +2613,7 @@ try_optimize_cfg (int mode)
>  
>    crossjumps_occured = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      update_forwarder_flag (bb);
>  
>    if (! targetm.cannot_modify_jumps_p ())
> @@ -2955,7 +2955,7 @@ delete_dead_jumptables (void)
>  
>    /* A dead jump table does not belong to any basic block.  Scan insns
>       between two adjacent basic blocks.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next;
>  
> diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
> index 014f78b..56bcd80 100644
> --- a/gcc/cfgexpand.c
> +++ b/gcc/cfgexpand.c
> @@ -520,7 +520,7 @@ add_scope_conflicts (void)
>  	}
>      }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      add_scope_conflicts_1 (bb, work, true);
>  
>    free (rpo);
> @@ -5378,7 +5378,7 @@ discover_nonconstant_array_refs (void)
>    basic_block bb;
>    gimple_stmt_iterator gsi;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	gimple stmt = gsi_stmt (gsi);
> diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
> index 9d28950..5639e7a 100644
> --- a/gcc/cfgloop.c
> +++ b/gcc/cfgloop.c
> @@ -50,7 +50,7 @@ flow_loops_cfg_dump (FILE *file)
>    if (!file)
>      return;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge succ;
>        edge_iterator ei;
> @@ -834,7 +834,7 @@ get_loop_body (const struct loop *loop)
>        gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks_for_fn (cfun));
>        body[tv++] = loop->header;
>        body[tv++] = EXIT_BLOCK_PTR_FOR_FN (cfun);
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	body[tv++] = bb;
>      }
>    else
> @@ -1082,7 +1082,7 @@ record_loop_exits (void)
>  					  loop_exit_hash, loop_exit_eq,
>  					  loop_exit_free);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_EACH_EDGE (e, ei, bb->succs)
>  	{
> @@ -1343,7 +1343,7 @@ verify_loop_structure (void)
>      verify_dominators (CDI_DOMINATORS);
>  
>    /* Check the headers.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb_loop_header_p (bb))
>        {
>  	if (bb->loop_father->header == NULL)
> @@ -1479,7 +1479,7 @@ verify_loop_structure (void)
>      {
>        /* Record old info.  */
>        irreds = sbitmap_alloc (last_basic_block_for_fn (cfun));
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  edge_iterator ei;
>  	  if (bb->flags & BB_IRREDUCIBLE_LOOP)
> @@ -1495,7 +1495,7 @@ verify_loop_structure (void)
>        mark_irreducible_loops ();
>  
>        /* Compare.  */
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  edge_iterator ei;
>  
> @@ -1578,7 +1578,7 @@ verify_loop_structure (void)
>  
>        sizes = XCNEWVEC (unsigned, num);
>        memset (sizes, 0, sizeof (unsigned) * num);
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  edge_iterator ei;
>  	  if (bb->loop_father == current_loops->tree_root)
> diff --git a/gcc/cfgloopanal.c b/gcc/cfgloopanal.c
> index 84b61c1..5e89cb1c 100644
> --- a/gcc/cfgloopanal.c
> +++ b/gcc/cfgloopanal.c
> @@ -432,7 +432,7 @@ mark_loop_exit_edges (void)
>    if (number_of_loops (cfun) <= 1)
>      return;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge_iterator ei;
>  
> diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
> index 5dc52a6..daadd9b 100644
> --- a/gcc/cfgrtl.c
> +++ b/gcc/cfgrtl.c
> @@ -416,7 +416,7 @@ compute_bb_for_insn (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx end = BB_END (bb);
>        rtx insn;
> @@ -2275,7 +2275,7 @@ find_partition_fixes (bool flag_only)
>    /* Callers check this.  */
>    gcc_checking_assert (crtl->has_bb_partition);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if ((BB_PARTITION (bb) == BB_COLD_PARTITION))
>        bbs_in_cold_partition.safe_push (bb);
>  
> @@ -2372,7 +2372,7 @@ verify_hot_cold_block_grouping (void)
>        || current_ir_type () != IR_RTL_CFGRTL)
>      return err;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (current_partition != BB_UNPARTITIONED
>            && BB_PARTITION (bb) != current_partition)
> @@ -3201,7 +3201,7 @@ purge_all_dead_edges (void)
>    int purged = false;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bool purged_here = purge_dead_edges (bb);
>  
> @@ -3226,7 +3226,7 @@ fixup_abnormal_edges (void)
>    bool inserted = false;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> @@ -3449,7 +3449,7 @@ record_effective_endpoints (void)
>      cfg_layout_function_header = NULL_RTX;
>  
>    next_insn = get_insns ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx end;
>  
> @@ -3479,7 +3479,7 @@ outof_cfg_layout_mode (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
>        bb->aux = bb->next_bb;
>  
> @@ -3857,7 +3857,7 @@ fixup_reorder_chain (void)
>    relink_block_chain (/*stay_in_cfglayout_mode=*/false);
>  
>    /* Annoying special case - jump around dead jumptables left in the code.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e = find_fallthru_edge (bb->succs);
>  
> @@ -3868,7 +3868,7 @@ fixup_reorder_chain (void)
>    /* Ensure goto_locus from edges has some instructions with that locus
>       in RTL.  */
>    if (!optimize)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        {
>          edge e;
>          edge_iterator ei;
> @@ -4047,7 +4047,7 @@ force_one_exit_fallthru (void)
>  
>    /* Fix up the chain of blocks -- make FORWARDER immediately precede the
>       exit block.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (bb->aux == NULL && bb != forwarder)
>  	{
> @@ -4258,7 +4258,7 @@ break_superblocks (void)
>    superblocks = sbitmap_alloc (last_basic_block_for_fn (cfun));
>    bitmap_clear (superblocks);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->flags & BB_SUPERBLOCK)
>        {
>  	bb->flags &= ~BB_SUPERBLOCK;
> diff --git a/gcc/cgraphbuild.c b/gcc/cgraphbuild.c
> index 6c6698b..429dc8e 100644
> --- a/gcc/cgraphbuild.c
> +++ b/gcc/cgraphbuild.c
> @@ -317,7 +317,7 @@ build_cgraph_edges (void)
>  
>    /* Create the callgraph edges and record the nodes referenced by the function.
>       body.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -451,7 +451,7 @@ rebuild_cgraph_edges (void)
>  
>    node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -505,7 +505,7 @@ cgraph_rebuild_references (void)
>  
>    node->count = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	ipa_record_stmt_references (node, gsi_stmt (gsi));
> diff --git a/gcc/combine-stack-adj.c b/gcc/combine-stack-adj.c
> index 5ca131f..5c897cf 100644
> --- a/gcc/combine-stack-adj.c
> +++ b/gcc/combine-stack-adj.c
> @@ -95,7 +95,7 @@ combine_stack_adjustments (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      combine_stack_adjustments_for_block (bb);
>  }
>  
> diff --git a/gcc/combine.c b/gcc/combine.c
> index c7eb5e5..dea6c28 100644
> --- a/gcc/combine.c
> +++ b/gcc/combine.c
> @@ -960,7 +960,7 @@ delete_noop_moves (void)
>    rtx insn, next;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (insn = BB_HEAD (bb); insn != NEXT_INSN (BB_END (bb)); insn = next)
>  	{
> @@ -997,7 +997,7 @@ create_log_links (void)
>       usage -- these are taken from original flow.c did. Don't ask me why it is
>       done this way; I don't know and if it works, I don't want to know.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS_REVERSE (bb, insn)
>          {
> @@ -1160,7 +1160,7 @@ combine_instructions (rtx f, unsigned int nregs)
>    last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
>  
>    create_log_links ();
> -  FOR_EACH_BB (this_basic_block)
> +  FOR_EACH_BB_FN (this_basic_block, cfun)
>      {
>        optimize_this_for_speed_p = optimize_bb_for_speed_p (this_basic_block);
>        last_call_luid = 0;
> @@ -1211,7 +1211,7 @@ combine_instructions (rtx f, unsigned int nregs)
>    setup_incoming_promotions (first);
>    last_bb = ENTRY_BLOCK_PTR_FOR_FN (cfun);
>  
> -  FOR_EACH_BB (this_basic_block)
> +  FOR_EACH_BB_FN (this_basic_block, cfun)
>      {
>        rtx last_combined_insn = NULL_RTX;
>        optimize_this_for_speed_p = optimize_bb_for_speed_p (this_basic_block);
> diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
> index b3a81b0..268e560 100644
> --- a/gcc/config/arm/arm.c
> +++ b/gcc/config/arm/arm.c
> @@ -16548,7 +16548,7 @@ thumb1_reorg (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx dest, src;
>        rtx pat, op0, set = NULL;
> @@ -16626,7 +16626,7 @@ thumb2_reorg (void)
>    compute_bb_for_insn ();
>    df_analyze ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>  
> diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
> index a1adf80..c15451c 100644
> --- a/gcc/config/bfin/bfin.c
> +++ b/gcc/config/bfin/bfin.c
> @@ -3957,7 +3957,7 @@ static void
>  bfin_gen_bundles (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next;
>        rtx slot[3];
> @@ -4036,7 +4036,7 @@ static void
>  reorder_var_tracking_notes (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next;
>        rtx queue = NULL_RTX;
> diff --git a/gcc/config/c6x/c6x.c b/gcc/config/c6x/c6x.c
> index af310ba..6f80bc8 100644
> --- a/gcc/config/c6x/c6x.c
> +++ b/gcc/config/c6x/c6x.c
> @@ -4629,7 +4629,7 @@ c6x_gen_bundles (void)
>    basic_block bb;
>    rtx insn, next, last_call;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next;
>        /* The machine is eight insns wide.  We can have up to six shadow
> @@ -5383,7 +5383,7 @@ conditionalize_after_sched (void)
>  {
>    basic_block bb;
>    rtx insn;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>  	unsigned uid = INSN_UID (insn);
> @@ -5959,7 +5959,7 @@ c6x_reorg (void)
>  
>    if (c6x_flag_schedule_insns2)
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	if ((bb->flags & BB_DISABLE_SCHEDULE) == 0)
>  	  assign_reservations (BB_HEAD (bb), BB_END (bb));
>      }
> diff --git a/gcc/config/epiphany/resolve-sw-modes.c b/gcc/config/epiphany/resolve-sw-modes.c
> index a780254..30f6920 100644
> --- a/gcc/config/epiphany/resolve-sw-modes.c
> +++ b/gcc/config/epiphany/resolve-sw-modes.c
> @@ -69,7 +69,7 @@ resolve_sw_modes (void)
>        df_note_add_problem ();
>        df_analyze ();
>      }
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>  	enum attr_fp_mode selected_mode;
> diff --git a/gcc/config/frv/frv.c b/gcc/config/frv/frv.c
> index a5aeb75..3755e62 100644
> --- a/gcc/config/frv/frv.c
> +++ b/gcc/config/frv/frv.c
> @@ -8070,11 +8070,11 @@ frv_optimize_membar (void)
>    first_io = XCNEWVEC (struct frv_io, last_basic_block_for_fn (cfun));
>    last_membar = XCNEWVEC (rtx, last_basic_block_for_fn (cfun));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      frv_optimize_membar_local (bb, &first_io[bb->index],
>  			       &last_membar[bb->index]);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (last_membar[bb->index] != 0)
>        frv_optimize_membar_global (bb, first_io, last_membar[bb->index]);
>  
> diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
> index 0f6612d..aa9694f 100644
> --- a/gcc/config/i386/i386.c
> +++ b/gcc/config/i386/i386.c
> @@ -10481,7 +10481,7 @@ ix86_finalize_stack_realign_flags (void)
>        add_to_hard_reg_set (&set_up_by_prologue, Pmode, ARG_POINTER_REGNUM);
>        add_to_hard_reg_set (&set_up_by_prologue, Pmode,
>  			   HARD_FRAME_POINTER_REGNUM);
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>          {
>            rtx insn;
>  	  FOR_BB_INSNS (bb, insn)
> diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
> index 8f305c1..a837974 100644
> --- a/gcc/config/ia64/ia64.c
> +++ b/gcc/config/ia64/ia64.c
> @@ -9688,7 +9688,7 @@ ia64_reorg (void)
>  
>        /* We can't let modulo-sched prevent us from scheduling any bbs,
>  	 since we need the final schedule to produce bundle information.  */
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	bb->flags &= ~BB_DISABLE_SCHEDULE;
>  
>        initiate_bundle_states ();
> diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
> index f19478c..e65dc6b 100644
> --- a/gcc/config/mips/mips.c
> +++ b/gcc/config/mips/mips.c
> @@ -15332,7 +15332,7 @@ mips_annotate_pic_calls (void)
>    basic_block bb;
>    rtx insn;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>      {
>        rtx call, reg, symbol, second_call;
> diff --git a/gcc/config/picochip/picochip.c b/gcc/config/picochip/picochip.c
> index 4756cb7..8861ffc 100644
> --- a/gcc/config/picochip/picochip.c
> +++ b/gcc/config/picochip/picochip.c
> @@ -3174,7 +3174,7 @@ reorder_var_tracking_notes (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next, last_insn = NULL_RTX;
>        rtx queue = NULL_RTX;
> diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
> index 599cf49..1db97fa 100644
> --- a/gcc/config/rs6000/rs6000.c
> +++ b/gcc/config/rs6000/rs6000.c
> @@ -16395,7 +16395,7 @@ rs6000_alloc_sdmode_stack_slot (void)
>    if (TARGET_NO_SDMODE_STACK)
>      return;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	tree ret = walk_gimple_op (gsi_stmt (gsi), rs6000_check_sdmode, NULL);
> diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
> index fcd7532..f9b7cd0 100644
> --- a/gcc/config/s390/s390.c
> +++ b/gcc/config/s390/s390.c
> @@ -7458,7 +7458,7 @@ s390_regs_ever_clobbered (char regs_ever_clobbered[])
>        if (!call_really_used_regs[i])
>  	regs_ever_clobbered[i] = 1;
>  
> -  FOR_EACH_BB (cur_bb)
> +  FOR_EACH_BB_FN (cur_bb, cfun)
>      {
>        FOR_BB_INSNS (cur_bb, cur_insn)
>  	{
> diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
> index 1a9895e..66209b6 100644
> --- a/gcc/config/spu/spu.c
> +++ b/gcc/config/spu/spu.c
> @@ -2645,7 +2645,7 @@ spu_machine_dependent_reorg (void)
>      find_many_sub_basic_blocks (blocks);
>  
>    /* We have to schedule to make sure alignment is ok. */
> -  FOR_EACH_BB (bb) bb->flags &= ~BB_DISABLE_SCHEDULE;
> +  FOR_EACH_BB_FN (bb, cfun) bb->flags &= ~BB_DISABLE_SCHEDULE;
>  
>    /* The hints need to be scheduled, so call it again. */
>    schedule_insns ();
> diff --git a/gcc/config/tilegx/tilegx.c b/gcc/config/tilegx/tilegx.c
> index c2f9e07..eecc9a9 100644
> --- a/gcc/config/tilegx/tilegx.c
> +++ b/gcc/config/tilegx/tilegx.c
> @@ -4383,7 +4383,7 @@ static void
>  tilegx_gen_bundles (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn, next;
>        rtx end = NEXT_INSN (BB_END (bb));
> @@ -4709,7 +4709,7 @@ static void
>  reorder_var_tracking_notes (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>    {
>      rtx insn, next;
>      rtx queue = NULL_RTX;
> diff --git a/gcc/config/tilepro/tilepro.c b/gcc/config/tilepro/tilepro.c
> index 31bc490..b2bafb4 100644
> --- a/gcc/config/tilepro/tilepro.c
> +++ b/gcc/config/tilepro/tilepro.c
> @@ -3988,7 +3988,7 @@ static void
>  tilepro_gen_bundles (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>    {
>      rtx insn, next;
>      rtx end = NEXT_INSN (BB_END (bb));
> @@ -4259,7 +4259,7 @@ static void
>  reorder_var_tracking_notes (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>    {
>      rtx insn, next;
>      rtx queue = NULL_RTX;
> diff --git a/gcc/coverage.c b/gcc/coverage.c
> index f2ac5fc..f7a2924 100644
> --- a/gcc/coverage.c
> +++ b/gcc/coverage.c
> @@ -588,7 +588,7 @@ coverage_compute_cfg_checksum (void)
>    basic_block bb;
>    unsigned chksum = n_basic_blocks_for_fn (cfun);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> diff --git a/gcc/cprop.c b/gcc/cprop.c
> index 600c617..7d07246 100644
> --- a/gcc/cprop.c
> +++ b/gcc/cprop.c
> @@ -400,7 +400,7 @@ compute_hash_table_work (struct hash_table_d *table)
>    /* Allocate vars to track sets of regs.  */
>    reg_set_bitmap = ALLOC_REG_SET (NULL);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>  
> @@ -649,7 +649,7 @@ compute_cprop_data (void)
>       aren't recorded for the local pass so they cannot be propagated within
>       their basic block by this pass and 2) the global pass would otherwise
>       propagate them only in the successors of their basic block.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        int index = implicit_set_indexes[bb->index];
>        if (index != -1)
> @@ -1234,7 +1234,7 @@ local_cprop_pass (void)
>    unsigned i;
>  
>    cselib_init (0);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS (bb, insn)
>  	{
> @@ -1359,7 +1359,7 @@ find_implicit_sets (void)
>  
>    implicit_sets = XCNEWVEC (rtx, implicit_sets_size);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Check for more than one successor.  */
>        if (EDGE_COUNT (bb->succs) <= 1)
> diff --git a/gcc/cse.c b/gcc/cse.c
> index 74ae8ba..0e28f48 100644
> --- a/gcc/cse.c
> +++ b/gcc/cse.c
> @@ -7335,7 +7335,7 @@ cse_condition_code_reg (void)
>    else
>      cc_reg_2 = NULL_RTX;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx last_insn;
>        rtx cc_reg;
> diff --git a/gcc/dce.c b/gcc/dce.c
> index 07d31f7..3101102 100644
> --- a/gcc/dce.c
> +++ b/gcc/dce.c
> @@ -623,7 +623,7 @@ prescan_insns_for_dce (bool fast)
>    if (!df_in_progress && ACCUMULATE_OUTGOING_ARGS)
>      arg_stores = BITMAP_ALLOC (NULL);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS_REVERSE_SAFE (bb, insn, prev)
>  	if (NONDEBUG_INSN_P (insn))
> diff --git a/gcc/df-core.c b/gcc/df-core.c
> index d41fb72..ba57d39 100644
> --- a/gcc/df-core.c
> +++ b/gcc/df-core.c
> @@ -1543,7 +1543,7 @@ df_compact_blocks (void)
>  	    bitmap_set_bit (dflow->out_of_date_transfer_functions, EXIT_BLOCK);
>  
>  	  i = NUM_FIXED_BLOCKS;
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    {
>  	      if (bitmap_bit_p (&tmp, bb->index))
>  		bitmap_set_bit (dflow->out_of_date_transfer_functions, i);
> @@ -1564,7 +1564,7 @@ df_compact_blocks (void)
>  	     place in the block_info vector.  Null out the copied
>  	     item.  The entry and exit blocks never move.  */
>  	  i = NUM_FIXED_BLOCKS;
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    {
>  	      df_set_bb_info (dflow, i,
>  			      (char *)problem_temps
> @@ -1590,7 +1590,7 @@ df_compact_blocks (void)
>        bitmap_copy (&tmp, df->blocks_to_analyze);
>        bitmap_clear (df->blocks_to_analyze);
>        i = NUM_FIXED_BLOCKS;
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  if (bitmap_bit_p (&tmp, bb->index))
>  	    bitmap_set_bit (df->blocks_to_analyze, i);
> @@ -1601,7 +1601,7 @@ df_compact_blocks (void)
>    bitmap_clear (&tmp);
>  
>    i = NUM_FIXED_BLOCKS;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        SET_BASIC_BLOCK_FOR_FN (cfun, i, bb);
>        bb->index = i;
> diff --git a/gcc/df-problems.c b/gcc/df-problems.c
> index ab19372..70f7254 100644
> --- a/gcc/df-problems.c
> +++ b/gcc/df-problems.c
> @@ -2427,7 +2427,7 @@ df_word_lr_alloc (bitmap all_blocks ATTRIBUTE_UNUSED)
>  
>    bitmap_obstack_initialize (&problem_data->word_lr_bitmaps);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_set_bit (df_word_lr->out_of_date_transfer_functions, bb->index);
>  
>    bitmap_set_bit (df_word_lr->out_of_date_transfer_functions, ENTRY_BLOCK);
> diff --git a/gcc/df-scan.c b/gcc/df-scan.c
> index 5f0ba4a..9f6f67a 100644
> --- a/gcc/df-scan.c
> +++ b/gcc/df-scan.c
> @@ -449,7 +449,7 @@ df_scan_start_dump (FILE *file ATTRIBUTE_UNUSED)
>  	fprintf (file, "} ");
>        }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        if (INSN_P (insn))
>  	{
> @@ -673,7 +673,7 @@ df_scan_blocks (void)
>    df_set_bb_dirty (BASIC_BLOCK_FOR_FN (cfun, EXIT_BLOCK));
>  
>    /* Regular blocks */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        unsigned int bb_index = bb->index;
>        df_bb_refs_record (bb_index, true);
> @@ -1415,7 +1415,7 @@ df_insn_rescan_all (void)
>    bitmap_clear (&df->insns_to_rescan);
>    bitmap_clear (&df->insns_to_notes_rescan);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        FOR_BB_INSNS (bb, insn)
> @@ -4154,7 +4154,7 @@ df_update_entry_exit_and_calls (void)
>  
>    /* The call insns need to be rescanned because there may be changes
>       in the set of registers clobbered across the call.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        FOR_BB_INSNS (bb, insn)
> diff --git a/gcc/dominance.c b/gcc/dominance.c
> index af73078..521b224 100644
> --- a/gcc/dominance.c
> +++ b/gcc/dominance.c
> @@ -662,7 +662,7 @@ calculate_dominance_info (enum cdi_direction dir)
>        calc_dfs_tree (&di, reverse);
>        calc_idoms (&di, reverse);
>  
> -      FOR_EACH_BB (b)
> +      FOR_EACH_BB_FN (b, cfun)
>  	{
>  	  TBB d = di.dom[di.dfs_order[b->index]];
>  
> @@ -1025,7 +1025,7 @@ verify_dominators (enum cdi_direction dir)
>    calc_dfs_tree (&di, reverse);
>    calc_idoms (&di, reverse);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        imm_bb = get_immediate_dominator (dir, bb);
>        if (!imm_bb)
> @@ -1492,7 +1492,7 @@ DEBUG_FUNCTION void
>  debug_dominance_info (enum cdi_direction dir)
>  {
>    basic_block bb, bb2;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if ((bb2 = get_immediate_dominator (dir, bb)))
>        fprintf (stderr, "%i %i\n", bb->index, bb2->index);
>  }
> diff --git a/gcc/dse.c b/gcc/dse.c
> index a926cb8..e5b0850 100644
> --- a/gcc/dse.c
> +++ b/gcc/dse.c
> @@ -3507,7 +3507,7 @@ static void
>  dse_step5_nospill (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bb_info_t bb_info = bb_table[bb->index];
>        insn_info_t insn_info = bb_info->last_insn;
> diff --git a/gcc/except.c b/gcc/except.c
> index e4b8cad..cf4fd14 100644
> --- a/gcc/except.c
> +++ b/gcc/except.c
> @@ -1511,7 +1511,7 @@ finish_eh_generation (void)
>      commit_edge_insertions ();
>  
>    /* Redirect all EH edges from the post_landing_pad to the landing pad.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        eh_landing_pad lp;
>        edge_iterator ei;
> diff --git a/gcc/final.c b/gcc/final.c
> index 2ab6a4d..f475d27 100644
> --- a/gcc/final.c
> +++ b/gcc/final.c
> @@ -700,14 +700,14 @@ compute_alignments (void)
>        flow_loops_dump (dump_file, NULL, 1);
>      }
>    loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->frequency > freq_max)
>        freq_max = bb->frequency;
>    freq_threshold = freq_max / PARAM_VALUE (PARAM_ALIGN_THRESHOLD);
>  
>    if (dump_file)
>      fprintf (dump_file, "freq_max: %i\n",freq_max);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx label = BB_HEAD (bb);
>        int fallthru_frequency = 0, branch_frequency = 0, has_fallthru = 0;
> diff --git a/gcc/function.c b/gcc/function.c
> index d257af4..e00f583 100644
> --- a/gcc/function.c
> +++ b/gcc/function.c
> @@ -6043,7 +6043,7 @@ thread_prologue_and_epilogue_insns (void)
>        max_grow_size = get_uncond_jump_length ();
>        max_grow_size *= PARAM_VALUE (PARAM_MAX_GROW_COPY_BB_INSNS);
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  rtx insn;
>  	  unsigned size = 0;
> @@ -6120,7 +6120,7 @@ thread_prologue_and_epilogue_insns (void)
>  	 needing a prologue.  */
>        bitmap_clear (&bb_on_list);
>        bitmap_and_compl (&bb_antic_flags, &bb_flags, &bb_tail);
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  if (!bitmap_bit_p (&bb_antic_flags, bb->index))
>  	    continue;
> @@ -6154,7 +6154,7 @@ thread_prologue_and_epilogue_insns (void)
>        /* Find exactly one edge that leads to a block in ANTIC from
>  	 a block that isn't.  */
>        if (!bitmap_bit_p (&bb_antic_flags, entry_edge->dest->index))
> -	FOR_EACH_BB (bb)
> +	FOR_EACH_BB_FN (bb, cfun)
>  	  {
>  	    if (!bitmap_bit_p (&bb_antic_flags, bb->index))
>  	      continue;
> @@ -6202,7 +6202,7 @@ thread_prologue_and_epilogue_insns (void)
>  	  /* Find tail blocks reachable from both blocks needing a
>  	     prologue and blocks not needing a prologue.  */
>  	  if (!bitmap_empty_p (&bb_tail))
> -	    FOR_EACH_BB (bb)
> +	    FOR_EACH_BB_FN (bb, cfun)
>  	      {
>  		bool some_pro, some_no_pro;
>  		if (!bitmap_bit_p (&bb_tail, bb->index))
> @@ -6480,7 +6480,7 @@ thread_prologue_and_epilogue_insns (void)
>  	 we take advantage of cfg_layout_finalize using
>  	 fixup_fallthru_exit_predecessor.  */
>        cfg_layout_initialize (0);
> -      FOR_EACH_BB (cur_bb)
> +      FOR_EACH_BB_FN (cur_bb, cfun)
>  	if (cur_bb->index >= NUM_FIXED_BLOCKS
>  	    && cur_bb->next_bb->index >= NUM_FIXED_BLOCKS)
>  	  cur_bb->aux = cur_bb->next_bb;
> @@ -7192,7 +7192,7 @@ rest_of_match_asm_constraints (void)
>      return 0;
>  
>    df_set_flags (DF_DEFER_INSN_RESCAN);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS (bb, insn)
>  	{
> diff --git a/gcc/gcse.c b/gcc/gcse.c
> index fa25a46..a6874ab 100644
> --- a/gcc/gcse.c
> +++ b/gcc/gcse.c
> @@ -1559,7 +1559,7 @@ compute_hash_table_work (struct hash_table_d *table)
>    for (i = 0; i < max_reg_num (); ++i)
>      reg_avail_info[i].last_bb = NULL;
>  
> -  FOR_EACH_BB (current_bb)
> +  FOR_EACH_BB_FN (current_bb, cfun)
>      {
>        rtx insn;
>        unsigned int regno;
> @@ -1899,7 +1899,7 @@ prune_expressions (bool pre_p)
>  	}
>      }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> @@ -2020,7 +2020,7 @@ compute_pre_data (void)
>       ~(TRANSP | COMP)
>    */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bitmap_ior (ae_kill[bb->index], transp[bb->index], comp[bb->index]);
>        bitmap_not (ae_kill[bb->index], ae_kill[bb->index]);
> @@ -2855,7 +2855,7 @@ compute_code_hoist_vbeinout (void)
>      {
>        fprintf (dump_file, "hoisting vbeinout computation: %d passes\n", passes);
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>          {
>  	  fprintf (dump_file, "vbein (%d): ", bb->index);
>  	  dump_bitmap_file (dump_file, hoist_vbein[bb->index]);
> @@ -3169,7 +3169,7 @@ hoist_code (void)
>    to_bb_head = XCNEWVEC (int, get_max_uid ());
>    bb_size = XCNEWVEC (int, last_basic_block_for_fn (cfun));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        int to_head;
> @@ -3512,7 +3512,7 @@ calculate_bb_reg_pressure (void)
>  
>    ira_setup_eliminable_regset ();
>    curr_regs_live = BITMAP_ALLOC (&reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        curr_bb = bb;
>        BB_DATA (bb)->live_in = BITMAP_ALLOC (NULL);
> @@ -3562,7 +3562,7 @@ calculate_bb_reg_pressure (void)
>      return;
>  
>    fprintf (dump_file, "\nRegister Pressure: \n");
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        fprintf (dump_file, "  Basic block %d: \n", bb->index);
>        for (i = 0; (int) i < ira_pressure_classes_num; i++)
> @@ -3888,7 +3888,7 @@ compute_ld_motion_mems (void)
>    pre_ldst_mems = NULL;
>    pre_ldst_table.create (13);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS (bb, insn)
>  	{
> diff --git a/gcc/gimple-iterator.c b/gcc/gimple-iterator.c
> index 9f51e6c..2460c61 100644
> --- a/gcc/gimple-iterator.c
> +++ b/gcc/gimple-iterator.c
> @@ -839,7 +839,7 @@ gsi_commit_edge_inserts (void)
>    gsi_commit_one_edge_insert (single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
>  			      NULL);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_EACH_EDGE (e, ei, bb->succs)
>        gsi_commit_one_edge_insert (e, NULL);
>  }
> diff --git a/gcc/gimple-ssa-isolate-paths.c b/gcc/gimple-ssa-isolate-paths.c
> index 052bf3f..aaa7537 100644
> --- a/gcc/gimple-ssa-isolate-paths.c
> +++ b/gcc/gimple-ssa-isolate-paths.c
> @@ -216,7 +216,7 @@ find_implicit_erroneous_behaviour (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator si;
>  
> @@ -304,7 +304,7 @@ find_explicit_erroneous_behaviour (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator si;
>  
> diff --git a/gcc/graphite-sese-to-poly.c b/gcc/graphite-sese-to-poly.c
> index 975db63..66c1b6e 100644
> --- a/gcc/graphite-sese-to-poly.c
> +++ b/gcc/graphite-sese-to-poly.c
> @@ -2295,7 +2295,7 @@ rewrite_reductions_out_of_ssa (scop_p scop)
>    gimple_stmt_iterator psi;
>    sese region = SCOP_REGION (scop);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb_in_sese_p (bb, region))
>        for (psi = gsi_start_phis (bb); !gsi_end_p (psi);)
>  	{
> @@ -2489,7 +2489,7 @@ rewrite_cross_bb_scalar_deps_out_of_ssa (scop_p scop)
>    /* Create an extra empty BB after the scop.  */
>    split_edge (SESE_EXIT (region));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb_in_sese_p (bb, region))
>        for (psi = gsi_start_bb (bb); !gsi_end_p (psi); gsi_next (&psi))
>  	changed |= rewrite_cross_bb_scalar_deps (scop, &psi);
> diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
> index d5e3309..4f3b054 100644
> --- a/gcc/haifa-sched.c
> +++ b/gcc/haifa-sched.c
> @@ -6709,7 +6709,7 @@ haifa_sched_init (void)
>  
>      sched_init_bbs ();
>  
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        bbs.quick_push (bb);
>      sched_init_luids (bbs);
>      sched_deps_init (true);
> diff --git a/gcc/hw-doloop.c b/gcc/hw-doloop.c
> index 77c8149..b6184a2 100644
> --- a/gcc/hw-doloop.c
> +++ b/gcc/hw-doloop.c
> @@ -357,7 +357,7 @@ discover_loops (bitmap_obstack *loop_stack, struct hw_doloop_hooks *hooks)
>    /* Find all the possible loop tails.  This means searching for every
>       loop_end instruction.  For each one found, create a hwloop_info
>       structure and add the head block to the work list. */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx tail = BB_END (bb);
>        rtx insn, reg;
> @@ -480,7 +480,7 @@ set_bb_indices (void)
>    intptr_t index;
>  
>    index = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->aux = (void *) index++;
>  }
>  
> @@ -537,7 +537,7 @@ reorder_loops (hwloop_info loops)
>        loops = loops->next;
>      }
>    
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
>  	bb->aux = bb->next_bb;
> diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
> index ac0276c..543a70d 100644
> --- a/gcc/ifcvt.c
> +++ b/gcc/ifcvt.c
> @@ -4408,7 +4408,7 @@ if_convert (bool after_combine)
>  	fprintf (dump_file, "\n\n========== Pass %d ==========\n", pass);
>  #endif
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>            basic_block new_bb;
>            while (!df_get_bb_dirty (bb)
> diff --git a/gcc/init-regs.c b/gcc/init-regs.c
> index 2a15b3e..d26ee9b 100644
> --- a/gcc/init-regs.c
> +++ b/gcc/init-regs.c
> @@ -59,7 +59,7 @@ initialize_uninitialized_regs (void)
>  
>    df_analyze ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        bitmap lr = DF_LR_IN (bb);
> diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
> index 83dc53e..7b16b7e 100644
> --- a/gcc/ipa-prop.c
> +++ b/gcc/ipa-prop.c
> @@ -4726,7 +4726,7 @@ ipcp_transform_function (struct cgraph_node *node)
>    descriptors.safe_grow_cleared (param_count);
>    ipa_populate_param_decls (node, descriptors);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	struct ipa_agg_replacement_value *v;
> diff --git a/gcc/ipa-pure-const.c b/gcc/ipa-pure-const.c
> index d84b35f..a60e078 100644
> --- a/gcc/ipa-pure-const.c
> +++ b/gcc/ipa-pure-const.c
> @@ -754,7 +754,7 @@ analyze_function (struct cgraph_node *fn, bool ipa)
>  
>    push_cfun (DECL_STRUCT_FUNCTION (decl));
>  
> -  FOR_EACH_BB (this_block)
> +  FOR_EACH_BB_FN (this_block, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        struct walk_stmt_info wi;
> diff --git a/gcc/ipa-split.c b/gcc/ipa-split.c
> index d5dfb8d..390adf1 100644
> --- a/gcc/ipa-split.c
> +++ b/gcc/ipa-split.c
> @@ -1070,7 +1070,7 @@ find_split_points (int overall_time, int overall_size)
>          stack.pop ();
>      }
>    ENTRY_BLOCK_PTR_FOR_FN (cfun)->aux = NULL;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->aux = NULL;
>    stack.release ();
>    BITMAP_FREE (current.ssa_names_to_pass);
> @@ -1595,7 +1595,7 @@ execute_split_functions (void)
>    /* Compute local info about basic blocks and determine function size/time.  */
>    bb_info_vec.safe_grow_cleared (last_basic_block_for_fn (cfun) + 1);
>    memset (&best_split_point, 0, sizeof (best_split_point));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        int time = 0;
>        int size = 0;
> diff --git a/gcc/ira-build.c b/gcc/ira-build.c
> index f9258ee..660fb0d 100644
> --- a/gcc/ira-build.c
> +++ b/gcc/ira-build.c
> @@ -341,7 +341,7 @@ form_loop_tree (void)
>    /* We can not use loop/bb node access macros because of potential
>       checking and because the nodes are not initialized enough
>       yet.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bb_node = &ira_bb_nodes[bb->index];
>        bb_node->bb = bb;
> diff --git a/gcc/ira-costs.c b/gcc/ira-costs.c
> index d7299e6..c8d64d5 100644
> --- a/gcc/ira-costs.c
> +++ b/gcc/ira-costs.c
> @@ -1585,7 +1585,7 @@ find_costs_and_classes (FILE *dump_file)
>  	{
>  	  basic_block bb;
>  
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    process_bb_for_costs (bb);
>  	}
>  
> diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
> index d59461b..196efa0 100644
> --- a/gcc/ira-emit.c
> +++ b/gcc/ira-emit.c
> @@ -986,7 +986,7 @@ emit_moves (void)
>    edge e;
>    rtx insns, tmp;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (at_bb_start[bb->index] != NULL)
>  	{
> @@ -1203,7 +1203,7 @@ add_ranges_and_copies (void)
>    bitmap live_through;
>  
>    live_through = ira_allocate_bitmap ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* It does not matter what loop_tree_node (of source or
>  	 destination block) to use for searching allocnos by their
> @@ -1260,7 +1260,7 @@ ira_emit (bool loops_p)
>    ira_free_bitmap (renamed_regno_bitmap);
>    ira_free_bitmap (local_allocno_bitmap);
>    setup_entered_from_non_parent_p ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        at_bb_start[bb->index] = NULL;
>        at_bb_end[bb->index] = NULL;
> @@ -1275,15 +1275,15 @@ ira_emit (bool loops_p)
>    memset (allocno_last_set_check, 0, sizeof (int) * max_reg_num ());
>    memset (hard_regno_last_set_check, 0, sizeof (hard_regno_last_set_check));
>    curr_tick = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      unify_moves (bb, true);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      unify_moves (bb, false);
>    move_vec.create (ira_allocnos_num);
>    emit_moves ();
>    add_ranges_and_copies ();
>    /* Clean up: */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        free_move_list (at_bb_start[bb->index]);
>        free_move_list (at_bb_end[bb->index]);
> @@ -1301,7 +1301,7 @@ ira_emit (bool loops_p)
>       reload assumes initial insn codes defined.  The insn codes can be
>       invalidated by CFG infrastructure for example in jump
>       redirection.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS_REVERSE (bb, insn)
>        if (INSN_P (insn))
>  	recog_memoized (insn);
> diff --git a/gcc/ira.c b/gcc/ira.c
> index ae35035..b4ae0ca 100644
> --- a/gcc/ira.c
> +++ b/gcc/ira.c
> @@ -2135,7 +2135,7 @@ decrease_live_ranges_number (void)
>    if (ira_dump_file)
>      fprintf (ira_dump_file, "Starting decreasing number of live ranges...\n");
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>  	set = single_set (insn);
> @@ -2358,7 +2358,7 @@ compute_regs_asm_clobbered (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        FOR_BB_INSNS_REVERSE (bb, insn)
> @@ -2951,7 +2951,7 @@ mark_elimination (int from, int to)
>    basic_block bb;
>    bitmap r;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        r = DF_LR_IN (bb);
>        if (bitmap_bit_p (r, from))
> @@ -3473,7 +3473,7 @@ update_equiv_regs (void)
>       paradoxical subreg. Don't set such reg sequivalent to a mem,
>       because lra will not substitute such equiv memory in order to
>       prevent access beyond allocated memory for paradoxical memory subreg.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        if (NONDEBUG_INSN_P (insn))
>  	for_each_rtx (&insn, set_paradoxical_subreg, (void *) pdx_subregs);
> @@ -3481,7 +3481,7 @@ update_equiv_regs (void)
>    /* Scan the insns and find which registers have equivalences.  Do this
>       in a separate scan of the insns because (due to -fcse-follow-jumps)
>       a register can be set below its use.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        loop_depth = bb_loop_depth (bb);
>  
> @@ -3905,7 +3905,7 @@ update_equiv_regs (void)
>  
>    if (!bitmap_empty_p (cleared_regs))
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  bitmap_and_compl_into (DF_LR_IN (bb), cleared_regs);
>  	  bitmap_and_compl_into (DF_LR_OUT (bb), cleared_regs);
> @@ -4532,7 +4532,7 @@ find_moveable_pseudos (void)
>    bitmap_initialize (&used, 0);
>    bitmap_initialize (&set, 0);
>    bitmap_initialize (&unusable_as_input, 0);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        bitmap transp = bb_transp_live + bb->index;
> @@ -4595,7 +4595,7 @@ find_moveable_pseudos (void)
>    bitmap_clear (&used);
>    bitmap_clear (&set);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bitmap local = bb_local + bb->index;
>        rtx insn;
> @@ -4824,7 +4824,7 @@ find_moveable_pseudos (void)
>  	}
>      }
>    
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bitmap_clear (bb_local + bb->index);
>        bitmap_clear (bb_transp_live + bb->index);
> @@ -4921,7 +4921,7 @@ split_live_ranges_for_shrink_wrap (void)
>    bitmap_initialize (&reachable, 0);
>    queue.create (n_basic_blocks_for_fn (cfun));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        if (CALL_P (insn) && !SIBLING_CALL_P (insn))
>  	{
> @@ -5145,7 +5145,7 @@ allocate_initial_values (void)
>  		     fixed regs are accepted.  */
>  		  SET_REGNO (preg, new_regno);
>  		  /* Update global register liveness information.  */
> -		  FOR_EACH_BB (bb)
> +		  FOR_EACH_BB_FN (bb, cfun)
>  		    {
>  		      if (REGNO_REG_SET_P (df_get_live_in (bb), regno))
>  			SET_REGNO_REG_SET (df_get_live_in (bb), new_regno);
> diff --git a/gcc/jump.c b/gcc/jump.c
> index a27aaa9..5eefeef 100644
> --- a/gcc/jump.c
> +++ b/gcc/jump.c
> @@ -275,7 +275,7 @@ mark_all_labels (rtx f)
>    if (current_ir_type () == IR_RTL_CFGLAYOUT)
>      {
>        basic_block bb;
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  /* In cfglayout mode, we don't bother with trivial next-insn
>  	     propagation of LABEL_REFs into JUMP_LABEL.  This will be
> diff --git a/gcc/lcm.c b/gcc/lcm.c
> index 1129d6c..0b528d9 100644
> --- a/gcc/lcm.c
> +++ b/gcc/lcm.c
> @@ -281,7 +281,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
>  
>    /* Add all the blocks to the worklist.  This prevents an early exit from
>       the loop given our optimistic initialization of LATER above.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        *qin++ = bb;
>        bb->aux = bb;
> @@ -350,7 +350,7 @@ compute_insert_delete (struct edge_list *edge_list, sbitmap *antloc,
>    int x;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_and_compl (del[bb->index], antloc[bb->index],
>  			laterin[bb->index]);
>  
> @@ -497,7 +497,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
>  
>    /* Put every block on the worklist; this is necessary because of the
>       optimistic initialization of AVOUT above.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        *qin++ = bb;
>        bb->aux = bb;
> @@ -638,7 +638,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
>  
>    /* Add all the blocks to the worklist.  This prevents an early exit
>       from the loop given our optimistic initialization of NEARER.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        *tos++ = bb;
>        bb->aux = bb;
> @@ -695,7 +695,7 @@ compute_rev_insert_delete (struct edge_list *edge_list, sbitmap *st_avloc,
>    int x;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_and_compl (del[bb->index], st_avloc[bb->index],
>  			nearerout[bb->index]);
>  
> diff --git a/gcc/loop-init.c b/gcc/loop-init.c
> index 664b1ac..3dc6953 100644
> --- a/gcc/loop-init.c
> +++ b/gcc/loop-init.c
> @@ -213,7 +213,7 @@ fix_loop_structure (bitmap changed_bbs)
>    /* Remember the depth of the blocks in the loop hierarchy, so that we can
>       recognize blocks whose loop nesting relationship has changed.  */
>    if (changed_bbs)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        bb->aux = (void *) (size_t) loop_depth (bb->loop_father);
>  
>    /* Remove the dead loops from structures.  We start from the innermost
> @@ -256,7 +256,7 @@ fix_loop_structure (bitmap changed_bbs)
>    /* Mark the blocks whose loop has changed.  */
>    if (changed_bbs)
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  if ((void *) (size_t) loop_depth (bb->loop_father) != bb->aux)
>  	    bitmap_set_bit (changed_bbs, bb->index);
> diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
> index 9f1fc07..f47bd50 100644
> --- a/gcc/loop-invariant.c
> +++ b/gcc/loop-invariant.c
> @@ -1825,7 +1825,7 @@ calculate_loop_reg_pressure (void)
>        }
>    ira_setup_eliminable_regset ();
>    bitmap_initialize (&curr_regs_live, &reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        curr_loop = bb->loop_father;
>        if (curr_loop == current_loops->tree_root)
> diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c
> index 60c47b9..0b0e397 100644
> --- a/gcc/lower-subreg.c
> +++ b/gcc/lower-subreg.c
> @@ -1463,7 +1463,7 @@ decompose_multiword_subregs (bool decompose_copies)
>    memset (reg_copy_graph.address (), 0, sizeof (bitmap) * max);
>  
>    speed_p = optimize_function_for_speed_p (cfun);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>  
> @@ -1543,7 +1543,7 @@ decompose_multiword_subregs (bool decompose_copies)
>        EXECUTE_IF_SET_IN_BITMAP (decomposable_context, 0, regno, iter)
>  	decompose_register (regno);
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  rtx insn;
>  
> diff --git a/gcc/lra-assigns.c b/gcc/lra-assigns.c
> index 88fc693..41ee286 100644
> --- a/gcc/lra-assigns.c
> +++ b/gcc/lra-assigns.c
> @@ -1302,7 +1302,7 @@ assign_by_spills (void)
>  
>        /* FIXME: Look up the changed insns in the cached LRA insn data using
>  	 an EXECUTE_IF_SET_IN_BITMAP over changed_insns.  */
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	FOR_BB_INSNS (bb, insn)
>  	if (bitmap_bit_p (&changed_insns, INSN_UID (insn)))
>  	  {
> diff --git a/gcc/lra-coalesce.c b/gcc/lra-coalesce.c
> index 859e02f..94a21f0 100644
> --- a/gcc/lra-coalesce.c
> +++ b/gcc/lra-coalesce.c
> @@ -239,7 +239,7 @@ lra_coalesce (void)
>    mv_num = 0;
>    /* Collect moves.  */
>    coalesced_moves = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS_SAFE (bb, insn, next)
>  	if (INSN_P (insn)
> @@ -297,7 +297,7 @@ lra_coalesce (void)
>  	}
>      }
>    bitmap_initialize (&used_pseudos_bitmap, &reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        update_live_info (df_get_live_in (bb));
>        update_live_info (df_get_live_out (bb));
> diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
> index bb5242a..f04166c 100644
> --- a/gcc/lra-constraints.c
> +++ b/gcc/lra-constraints.c
> @@ -5300,7 +5300,7 @@ lra_inheritance (void)
>    bitmap_initialize (&live_regs, &reg_obstack);
>    bitmap_initialize (&temp_bitmap, &reg_obstack);
>    bitmap_initialize (&ebb_global_regs, &reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        start_bb = bb;
>        if (lra_dump_file != NULL)
> @@ -5401,7 +5401,7 @@ remove_inheritance_pseudos (bitmap remove_pseudos)
>       because we need to marks insns affected by previous
>       inheritance/split pass for processing by the subsequent
>       constraint pass.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        fix_bb_live_info (df_get_live_in (bb), remove_pseudos);
>        fix_bb_live_info (df_get_live_out (bb), remove_pseudos);
> diff --git a/gcc/lra-eliminations.c b/gcc/lra-eliminations.c
> index 915e3a0..6c52bb3 100644
> --- a/gcc/lra-eliminations.c
> +++ b/gcc/lra-eliminations.c
> @@ -1284,7 +1284,7 @@ init_elimination (void)
>    struct elim_table *ep;
>  
>    init_elim_table ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        curr_sp_change = 0;
>        stop_to_sp_elimination_p = false;
> diff --git a/gcc/lra-spills.c b/gcc/lra-spills.c
> index 6bebb92..1e5f52b 100644
> --- a/gcc/lra-spills.c
> +++ b/gcc/lra-spills.c
> @@ -280,7 +280,7 @@ assign_spill_hard_regs (int *pseudo_regnos, int n)
>  	  add_to_hard_reg_set (&reserved_hard_regs[p],
>  			       lra_reg_info[i].biggest_mode, hard_regno);
>    bitmap_initialize (&ok_insn_bitmap, &reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        if (DEBUG_INSN_P (insn)
>  	  || ((set = single_set (insn)) != NULL_RTX
> @@ -478,7 +478,7 @@ spill_pseudos (void)
>  	  bitmap_ior_into (&changed_insns, &lra_reg_info[i].insn_bitmap);
>  	}
>      }
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS (bb, insn)
>  	if (bitmap_bit_p (&changed_insns, INSN_UID (insn)))
> @@ -686,7 +686,7 @@ lra_final_code_change (void)
>      if (lra_reg_info[i].nrefs != 0
>  	&& (hard_regno = lra_get_regno_hard_regno (i)) >= 0)
>        SET_REGNO (regno_reg_rtx[i], hard_regno);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS_SAFE (bb, insn, curr)
>        if (INSN_P (insn))
>  	{
> diff --git a/gcc/lra.c b/gcc/lra.c
> index 50a0786..21b8af1 100644
> --- a/gcc/lra.c
> +++ b/gcc/lra.c
> @@ -1960,7 +1960,7 @@ remove_scratches (void)
>    scratches.create (get_max_uid ());
>    bitmap_initialize (&scratch_bitmap, &reg_obstack);
>    bitmap_initialize (&scratch_operand_bitmap, &reg_obstack);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>      if (INSN_P (insn))
>        {
> @@ -2049,7 +2049,7 @@ check_rtl (bool final_p)
>    rtx insn;
>  
>    lra_assert (! final_p || reload_completed);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>      if (NONDEBUG_INSN_P (insn)
>  	&& GET_CODE (PATTERN (insn)) != USE
> @@ -2090,7 +2090,7 @@ has_nonexceptional_receiver (void)
>    /* First determine which blocks can reach exit via normal paths.  */
>    tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->flags &= ~BB_REACHABLE;
>  
>    /* Place the exit block on our worklist.  */
> @@ -2165,7 +2165,7 @@ update_inc_notes (void)
>    basic_block bb;
>    rtx insn;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>      if (NONDEBUG_INSN_P (insn))
>        {
> diff --git a/gcc/mcf.c b/gcc/mcf.c
> index e709f2a..f9b5505 100644
> --- a/gcc/mcf.c
> +++ b/gcc/mcf.c
> @@ -1281,7 +1281,7 @@ adjust_cfg_counts (fixup_graph_type *fixup_graph)
>      {
>        fprintf (dump_file, "\nCheck %s() CFG flow conservation:\n",
>  	       current_function_name ());
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>          {
>            if ((bb->count != sum_edge_counts (bb->preds))
>                 || (bb->count != sum_edge_counts (bb->succs)))
> diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
> index a9e5069..4e31d68 100644
> --- a/gcc/mode-switching.c
> +++ b/gcc/mode-switching.c
> @@ -516,7 +516,7 @@ optimize_mode_switching (void)
>        /* Determine what the first use (if any) need for a mode of entity E is.
>  	 This will be the mode that is anticipatable for this block.
>  	 Also compute the initial transparency settings.  */
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  struct seginfo *ptr;
>  	  int last_mode = no_mode;
> @@ -624,7 +624,7 @@ optimize_mode_switching (void)
>  	  int m = current_mode[j] = MODE_PRIORITY_TO_MODE (entity_map[j], i);
>  	  struct bb_info *info = bb_info[j];
>  
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    {
>  	      if (info[bb->index].seginfo->mode == m)
>  		bitmap_set_bit (antic[bb->index], j);
> @@ -637,7 +637,7 @@ optimize_mode_switching (void)
>        /* Calculate the optimal locations for the
>  	 placement mode switches to modes with priority I.  */
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	bitmap_not (kill[bb->index], transp[bb->index]);
>        edge_list = pre_edge_lcm (n_entities, transp, comp, antic,
>  				kill, &insert, &del);
> diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
> index f313044..ba8d020 100644
> --- a/gcc/modulo-sched.c
> +++ b/gcc/modulo-sched.c
> @@ -3343,7 +3343,7 @@ rest_of_handle_sms (void)
>    max_regno = max_reg_num ();
>  
>    /* Finalize layout changes.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
>        bb->aux = bb->next_bb;
>    free_dominance_info (CDI_DOMINATORS);
> diff --git a/gcc/omp-low.c b/gcc/omp-low.c
> index c929157..05fca40 100644
> --- a/gcc/omp-low.c
> +++ b/gcc/omp-low.c
> @@ -4545,7 +4545,7 @@ optimize_omp_library_calls (gimple entry_stmt)
>  		      && find_omp_clause (gimple_omp_task_clauses (entry_stmt),
>  					  OMP_CLAUSE_UNTIED) != NULL);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	gimple call = gsi_stmt (gsi);
> @@ -4849,7 +4849,7 @@ expand_omp_taskreg (struct omp_region *region)
>  	  basic_block bb;
>  	  bool changed = false;
>  
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    changed |= gimple_purge_dead_eh_edges (bb);
>  	  if (changed)
>  	    cleanup_tree_cfg ();
> @@ -7939,7 +7939,7 @@ expand_omp_target (struct omp_region *region)
>  	  basic_block bb;
>  	  bool changed = false;
>  
> -	  FOR_EACH_BB (bb)
> +	  FOR_EACH_BB_FN (bb, cfun)
>  	    changed |= gimple_purge_dead_eh_edges (bb);
>  	  if (changed)
>  	    cleanup_tree_cfg ();
> diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
> index 9ce17e5..a1204f9 100644
> --- a/gcc/postreload-gcse.c
> +++ b/gcc/postreload-gcse.c
> @@ -266,7 +266,7 @@ alloc_mem (void)
>    /* Find the largest UID and create a mapping from UIDs to CUIDs.  */
>    uid_cuid = XCNEWVEC (int, get_max_uid () + 1);
>    i = 1;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>          if (INSN_P (insn))
> @@ -828,7 +828,7 @@ compute_hash_table (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>  
> diff --git a/gcc/postreload.c b/gcc/postreload.c
> index b0c6342..bfa5a38 100644
> --- a/gcc/postreload.c
> +++ b/gcc/postreload.c
> @@ -213,7 +213,7 @@ reload_cse_regs_1 (void)
>    cselib_init (CSELIB_RECORD_MEMORY);
>    init_alias_analysis ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>  	if (INSN_P (insn))
> diff --git a/gcc/predict.c b/gcc/predict.c
> index 6bb1b2c..78efb72 100644
> --- a/gcc/predict.c
> +++ b/gcc/predict.c
> @@ -1955,7 +1955,7 @@ strip_predict_hints (void)
>    gimple ass_stmt;
>    tree var;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator bi;
>        for (bi = gsi_start_bb (bb); !gsi_end_p (bi);)
> @@ -2226,7 +2226,7 @@ tree_bb_level_predictions (void)
>  
>    apply_return_prediction ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> @@ -2400,10 +2400,10 @@ tree_estimate_probability (void)
>    if (number_of_loops (cfun) > 1)
>      predict_loops ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      tree_estimate_probability_bb (bb);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      combine_predictions_for_bb (bb);
>  
>  #ifdef ENABLE_CHECKING
> @@ -2928,7 +2928,7 @@ expensive_function_p (int threshold)
>  
>    /* Maximally BB_FREQ_MAX^2 so overflow won't happen.  */
>    limit = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency * threshold;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>  
> @@ -2997,7 +2997,7 @@ estimate_bb_frequencies (bool force)
>        estimate_loops ();
>  
>        memcpy (&freq_max, &real_zero, sizeof (real_zero));
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	if (sreal_compare (&freq_max, &BLOCK_INFO (bb)->frequency) < 0)
>  	  memcpy (&freq_max, &BLOCK_INFO (bb)->frequency, sizeof (freq_max));
>  
> @@ -3055,7 +3055,7 @@ compute_function_frequency (void)
>       functions to unlikely and that is most of what we care about.  */
>    if (!cfun->after_inlining)
>      node->frequency = NODE_FREQUENCY_UNLIKELY_EXECUTED;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (maybe_hot_bb_p (cfun, bb))
>  	{
> diff --git a/gcc/profile.c b/gcc/profile.c
> index 24c16aa..62b126c 100644
> --- a/gcc/profile.c
> +++ b/gcc/profile.c
> @@ -354,7 +354,7 @@ is_inconsistent (void)
>  {
>    basic_block bb;
>    bool inconsistent = false;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        inconsistent |= is_edge_inconsistent (bb->preds);
>        if (!dump_file && inconsistent)
> @@ -692,7 +692,7 @@ compute_branch_probabilities (unsigned cfg_checksum, unsigned lineno_checksum)
>  
>    /* If the graph has been correctly solved, every block will have a
>       succ and pred count of zero.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gcc_assert (!BB_INFO (bb)->succ_count && !BB_INFO (bb)->pred_count);
>      }
> @@ -1021,7 +1021,7 @@ branch_prob (void)
>       We also add fake exit edges for each call and asm statement in the
>       basic, since it may not return.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        int need_exit_edge = 0, need_entry_edge = 0;
>        int have_exit_edge = 0, have_entry_edge = 0;
> @@ -1260,7 +1260,7 @@ branch_prob (void)
>        /* Initialize the output.  */
>        output_location (NULL, 0, NULL, NULL);
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  gimple_stmt_iterator gsi;
>  	  gcov_position_t offset = 0;
> diff --git a/gcc/ree.c b/gcc/ree.c
> index 87427fd..9938e98 100644
> --- a/gcc/ree.c
> +++ b/gcc/ree.c
> @@ -835,7 +835,7 @@ find_removable_extensions (void)
>    rtx insn, set;
>    unsigned *def_map = XCNEWVEC (unsigned, max_insn_uid);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        {
>  	if (!NONDEBUG_INSN_P (insn))
> diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
> index 6aad466..87b9821 100644
> --- a/gcc/reg-stack.c
> +++ b/gcc/reg-stack.c
> @@ -2846,7 +2846,7 @@ compensate_edges (void)
>  
>    starting_stack_p = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb != ENTRY_BLOCK_PTR_FOR_FN (cfun))
>        {
>          edge e;
> @@ -3153,7 +3153,7 @@ convert_regs (void)
>  
>    /* ??? Process all unreachable blocks.  Though there's no excuse
>       for keeping these even when not optimizing.  */
> -  FOR_EACH_BB (b)
> +  FOR_EACH_BB_FN (b, cfun)
>      {
>        block_info bi = BLOCK_INFO (b);
>  
> @@ -3212,7 +3212,7 @@ reg_to_stack (void)
>  
>    /* Set up block info for each basic block.  */
>    alloc_aux_for_blocks (sizeof (struct block_info_def));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        block_info bi = BLOCK_INFO (bb);
>        edge_iterator ei;
> diff --git a/gcc/regcprop.c b/gcc/regcprop.c
> index 0438875..3c9ef3d 100644
> --- a/gcc/regcprop.c
> +++ b/gcc/regcprop.c
> @@ -1076,7 +1076,7 @@ copyprop_hardreg_forward (void)
>        = create_alloc_pool ("debug insn changes pool",
>  			   sizeof (struct queued_debug_insn_change), 256);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bitmap_set_bit (visited, bb->index);
>  
> @@ -1112,7 +1112,7 @@ copyprop_hardreg_forward (void)
>  
>    if (MAY_HAVE_DEBUG_INSNS)
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	if (bitmap_bit_p (visited, bb->index)
>  	    && all_vd[bb->index].n_debug_insn_changes)
>  	  {
> diff --git a/gcc/reginfo.c b/gcc/reginfo.c
> index db66a09..46288eb 100644
> --- a/gcc/reginfo.c
> +++ b/gcc/reginfo.c
> @@ -1266,7 +1266,7 @@ init_subregs_of_mode (void)
>    bitmap_obstack_initialize (&srom_obstack);
>    subregs_of_mode = BITMAP_ALLOC (&srom_obstack);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      FOR_BB_INSNS (bb, insn)
>        if (NONDEBUG_INSN_P (insn))
>          find_subregs_of_mode (PATTERN (insn), subregs_of_mode);
> diff --git a/gcc/regrename.c b/gcc/regrename.c
> index 3c242fb..9ff94d0 100644
> --- a/gcc/regrename.c
> +++ b/gcc/regrename.c
> @@ -674,7 +674,7 @@ regrename_analyze (bitmap bb_mask)
>    /* Gather some information about the blocks in this function.  */
>    rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks_for_fn (cfun));
>    i = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        struct bb_rename_info *ri = rename_info + i;
>        ri->bb = bb;
> @@ -778,7 +778,7 @@ regrename_analyze (bitmap bb_mask)
>       We perform the analysis for both incoming and outgoing edges, but we
>       only need to merge once (in the second part, after verifying outgoing
>       edges).  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        struct bb_rename_info *bb_ri = (struct bb_rename_info *) bb->aux;
>        unsigned j;
> @@ -843,7 +843,7 @@ regrename_analyze (bitmap bb_mask)
>  	    }
>  	}
>      }
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        struct bb_rename_info *bb_ri = (struct bb_rename_info *) bb->aux;
>        unsigned j;
> @@ -920,7 +920,7 @@ regrename_analyze (bitmap bb_mask)
>  
>    free (rename_info);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->aux = NULL;
>  }
>  
> diff --git a/gcc/regstat.c b/gcc/regstat.c
> index 48d27c3..6a191d8 100644
> --- a/gcc/regstat.c
> +++ b/gcc/regstat.c
> @@ -375,7 +375,7 @@ regstat_compute_ri (void)
>    reg_info_p = XCNEWVEC (struct reg_info_t, max_regno);
>    local_live_last_luid = XNEWVEC (int, max_regno);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        regstat_bb_compute_ri (bb->index, live, artificial_uses,
>  			     local_live, local_processed,
> @@ -522,7 +522,7 @@ regstat_compute_calls_crossed (void)
>    reg_info_p_size = max_regno;
>    reg_info_p = XCNEWVEC (struct reg_info_t, max_regno);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        regstat_bb_compute_calls_crossed (bb->index, live);
>      }
> diff --git a/gcc/reload1.c b/gcc/reload1.c
> index 15c6db5..47439ce 100644
> --- a/gcc/reload1.c
> +++ b/gcc/reload1.c
> @@ -613,7 +613,7 @@ has_nonexceptional_receiver (void)
>    /* First determine which blocks can reach exit via normal paths.  */
>    tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->flags &= ~BB_REACHABLE;
>  
>    /* Place the exit block on our worklist.  */
> @@ -641,7 +641,7 @@ has_nonexceptional_receiver (void)
>  
>    /* Now see if there's a reachable block with an exceptional incoming
>       edge.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->flags & BB_REACHABLE && bb_has_abnormal_pred (bb))
>        return true;
>  
> @@ -1048,7 +1048,7 @@ reload (rtx first, int global)
>       pseudo.  */
>  
>    if (! frame_pointer_needed)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        bitmap_clear_bit (df_get_live_in (bb), HARD_FRAME_POINTER_REGNUM);
>  
>    /* Come here (with failure set nonzero) if we can't get enough spill
> @@ -1592,7 +1592,7 @@ calculate_elim_costs_all_insns (void)
>    set_initial_elim_offsets ();
>    set_initial_label_offsets ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        elim_bb = bb;
> diff --git a/gcc/resource.c b/gcc/resource.c
> index 861d969..442c852 100644
> --- a/gcc/resource.c
> +++ b/gcc/resource.c
> @@ -1219,7 +1219,7 @@ init_resource_info (rtx epilogue_insn)
>    bb_ticks = XCNEWVEC (int, last_basic_block_for_fn (cfun));
>  
>    /* Set the BLOCK_FOR_INSN of each label that starts a basic block.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (LABEL_P (BB_HEAD (bb)))
>        BLOCK_FOR_INSN (BB_HEAD (bb)) = bb;
>  }
> @@ -1258,7 +1258,7 @@ free_resource_info (void)
>        bb_ticks = NULL;
>      }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (LABEL_P (BB_HEAD (bb)))
>        BLOCK_FOR_INSN (BB_HEAD (bb)) = NULL;
>  }
> diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
> index 73af0a7..d4baec5 100644
> --- a/gcc/sched-ebb.c
> +++ b/gcc/sched-ebb.c
> @@ -637,7 +637,7 @@ schedule_ebbs (void)
>    schedule_ebbs_init ();
>  
>    /* Schedule every region in the subroutine.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx head = BB_HEAD (bb);
>  
> diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
> index a85ee5b..7fa9759 100644
> --- a/gcc/sched-rgn.c
> +++ b/gcc/sched-rgn.c
> @@ -272,7 +272,7 @@ is_cfg_nonregular (void)
>  
>    /* If we have insns which refer to labels as non-jumped-to operands,
>       then we consider the cfg not well structured.  */
> -  FOR_EACH_BB (b)
> +  FOR_EACH_BB_FN (b, cfun)
>      FOR_BB_INSNS (b, insn)
>        {
>  	rtx note, next, set, dest;
> @@ -317,7 +317,7 @@ is_cfg_nonregular (void)
>       Unreachable loops with a single block are detected here.  This
>       test is redundant with the one in find_rgns, but it's much
>       cheaper to go ahead and catch the trivial case here.  */
> -  FOR_EACH_BB (b)
> +  FOR_EACH_BB_FN (b, cfun)
>      {
>        if (EDGE_COUNT (b->preds) == 0
>  	  || (single_pred_p (b)
> @@ -479,7 +479,7 @@ find_single_block_region (bool ebbs_p)
>        probability_cutoff = PARAM_VALUE (TRACER_MIN_BRANCH_PROBABILITY);
>      probability_cutoff = REG_BR_PROB_BASE / 100 * probability_cutoff;
>  
> -    FOR_EACH_BB (ebb_start)
> +    FOR_EACH_BB_FN (ebb_start, cfun)
>        {
>          RGN_NR_BLOCKS (nr_regions) = 0;
>          RGN_BLOCKS (nr_regions) = i;
> @@ -512,7 +512,7 @@ find_single_block_region (bool ebbs_p)
>        }
>    }
>    else
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        {
>          rgn_bb_table[nr_regions] = bb->index;
>          RGN_NR_BLOCKS (nr_regions) = 1;
> @@ -762,7 +762,7 @@ haifa_find_rgns (void)
>       the entry node by placing a nonzero value in dfs_nr.  Thus if
>       dfs_nr is zero for any block, then it must be unreachable.  */
>    unreachable = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (dfs_nr[bb->index] == 0)
>        {
>  	unreachable = 1;
> @@ -773,7 +773,7 @@ haifa_find_rgns (void)
>       to hold degree counts.  */
>    degree = dfs_nr;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      degree[bb->index] = EDGE_COUNT (bb->preds);
>  
>    /* Do not perform region scheduling if there are any unreachable
> @@ -807,7 +807,7 @@ haifa_find_rgns (void)
>  
>        /* Find blocks which are inner loop headers.  We still have non-reducible
>  	 loops to consider at this point.  */
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  if (bitmap_bit_p (header, bb->index) && bitmap_bit_p (inner, bb->index))
>  	    {
> @@ -826,7 +826,7 @@ haifa_find_rgns (void)
>  		 If there exists a block that is not dominated by the loop
>  		 header, then the block is reachable from outside the loop
>  		 and thus the loop is not a natural loop.  */
> -	      FOR_EACH_BB (jbb)
> +	      FOR_EACH_BB_FN (jbb, cfun)
>  		{
>  		  /* First identify blocks in the loop, except for the loop
>  		     entry block.  */
> @@ -874,7 +874,7 @@ haifa_find_rgns (void)
>  		 Place those blocks into the queue.  */
>  	      if (no_loops)
>  		{
> -		  FOR_EACH_BB (jbb)
> +		  FOR_EACH_BB_FN (jbb, cfun)
>  		    /* Leaf nodes have only a single successor which must
>  		       be EXIT_BLOCK.  */
>  		    if (single_succ_p (jbb)
> @@ -1052,7 +1052,7 @@ haifa_find_rgns (void)
>  
>    /* Any block that did not end up in a region is placed into a region
>       by itself.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (degree[bb->index] >= 0)
>        {
>  	rgn_bb_table[idx] = bb->index;
> @@ -3281,7 +3281,7 @@ sched_rgn_local_init (int rgn)
>  
>        /* Use ->aux to implement EDGE_TO_BIT mapping.  */
>        rgn_nr_edges = 0;
> -      FOR_EACH_BB (block)
> +      FOR_EACH_BB_FN (block, cfun)
>  	{
>  	  if (CONTAINING_RGN (block->index) != rgn)
>  	    continue;
> @@ -3291,7 +3291,7 @@ sched_rgn_local_init (int rgn)
>  
>        rgn_edges = XNEWVEC (edge, rgn_nr_edges);
>        rgn_nr_edges = 0;
> -      FOR_EACH_BB (block)
> +      FOR_EACH_BB_FN (block, cfun)
>  	{
>  	  if (CONTAINING_RGN (block->index) != rgn)
>  	    continue;
> @@ -3312,7 +3312,7 @@ sched_rgn_local_init (int rgn)
>        /* Cleanup ->aux used for EDGE_TO_BIT mapping.  */
>        /* We don't need them anymore.  But we want to avoid duplication of
>  	 aux fields in the newly created edges.  */
> -      FOR_EACH_BB (block)
> +      FOR_EACH_BB_FN (block, cfun)
>  	{
>  	  if (CONTAINING_RGN (block->index) != rgn)
>  	    continue;
> diff --git a/gcc/sel-sched-dump.c b/gcc/sel-sched-dump.c
> index 347b5eb..2e46770 100644
> --- a/gcc/sel-sched-dump.c
> +++ b/gcc/sel-sched-dump.c
> @@ -750,7 +750,7 @@ sel_dump_cfg_2 (FILE *f, int flags)
>    if (flags & SEL_DUMP_CFG_FUNCTION_NAME)
>      fprintf (f, "function [label = \"%s\"];\n", current_function_name ());
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        insn_t insn = BB_HEAD (bb);
>        insn_t next_tail = NEXT_INSN (BB_END (bb));
> diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
> index f7cc9ec..942d909 100644
> --- a/gcc/sel-sched-ir.c
> +++ b/gcc/sel-sched-ir.c
> @@ -4321,7 +4321,7 @@ init_lv_sets (void)
>    basic_block bb;
>  
>    /* Initialize of LV sets.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      init_lv_set (bb);
>  
>    /* Don't forget EXIT_BLOCK.  */
> @@ -4349,7 +4349,7 @@ free_lv_sets (void)
>    free_lv_set (EXIT_BLOCK_PTR_FOR_FN (cfun));
>  
>    /* Free LV sets.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (BB_LV_SET (bb))
>        free_lv_set (bb);
>  }
> @@ -6155,7 +6155,7 @@ make_regions_from_the_rest (void)
>    for (i = 0; i < last_basic_block_for_fn (cfun); i++)
>      loop_hdr[i] = -1;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (bb->loop_father && !bb->loop_father->num == 0
>  	  && !(bb->flags & BB_IRREDUCIBLE_LOOP))
> @@ -6165,7 +6165,7 @@ make_regions_from_the_rest (void)
>    /* For each basic block degree is calculated as the number of incoming
>       edges, that are going out of bbs that are not yet scheduled.
>       The basic blocks that are scheduled have degree value of zero.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        degree[bb->index] = 0;
>  
> @@ -6183,7 +6183,7 @@ make_regions_from_the_rest (void)
>  
>    /* Any block that did not end up in a region is placed into a region
>       by itself.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (degree[bb->index] >= 0)
>        {
>  	rgn_bb_table[cur_rgn_blocks] = bb->index;
> diff --git a/gcc/sese.c b/gcc/sese.c
> index 7e59ac8..5e47ef7 100644
> --- a/gcc/sese.c
> +++ b/gcc/sese.c
> @@ -156,7 +156,7 @@ build_sese_loop_nests (sese region)
>    basic_block bb;
>    struct loop *loop0, *loop1;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb_in_sese_p (bb, region))
>        {
>  	struct loop *loop = bb->loop_father;
> @@ -303,10 +303,10 @@ sese_build_liveouts (sese region, bitmap liveouts)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      sese_build_liveouts_bb (region, liveouts, bb);
>    if (MAY_HAVE_DEBUG_STMTS)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        sese_reset_debug_liveouts_bb (region, liveouts, bb);
>  }
>  
> diff --git a/gcc/stack-ptr-mod.c b/gcc/stack-ptr-mod.c
> index 68ccd16..acca801 100644
> --- a/gcc/stack-ptr-mod.c
> +++ b/gcc/stack-ptr-mod.c
> @@ -58,7 +58,7 @@ notice_stack_pointer_modification (void)
>       been used.  */
>    crtl->sp_is_unchanging = !cfun->calls_alloca;
>    if (crtl->sp_is_unchanging)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        FOR_BB_INSNS (bb, insn)
>          {
>  	  if (INSN_P (insn))
> diff --git a/gcc/store-motion.c b/gcc/store-motion.c
> index 808b0a7..57c991a 100644
> --- a/gcc/store-motion.c
> +++ b/gcc/store-motion.c
> @@ -656,7 +656,7 @@ compute_store_table (void)
>    already_set = XNEWVEC (int, max_gcse_regno);
>  
>    /* Find all the stores we care about.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* First compute the registers set in this block.  */
>        FOR_BB_INSNS (bb, insn)
> @@ -1061,7 +1061,7 @@ build_store_vectors (void)
>    bitmap_vector_clear (st_transp, last_basic_block_for_fn (cfun));
>    regs_set_in_block = XNEWVEC (int, max_gcse_regno);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        memset (regs_set_in_block, 0, sizeof (int) * max_gcse_regno);
>  
> @@ -1188,7 +1188,7 @@ one_store_motion_pass (void)
>  
>        /* Now we want to insert the new stores which are going to be needed.  */
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	if (bitmap_bit_p (st_delete_map[bb->index], ptr->index))
>  	  {
>  	    delete_store (ptr, bb);
> diff --git a/gcc/testsuite/g++.dg/plugin/selfassign.c b/gcc/testsuite/g++.dg/plugin/selfassign.c
> index be5a204..041f25d 100644
> --- a/gcc/testsuite/g++.dg/plugin/selfassign.c
> +++ b/gcc/testsuite/g++.dg/plugin/selfassign.c
> @@ -261,7 +261,7 @@ execute_warn_self_assign (void)
>    gimple_stmt_iterator gsi;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>          warn_self_assign (gsi_stmt (gsi));
> diff --git a/gcc/testsuite/gcc.dg/plugin/selfassign.c b/gcc/testsuite/gcc.dg/plugin/selfassign.c
> index be5a204..041f25d 100644
> --- a/gcc/testsuite/gcc.dg/plugin/selfassign.c
> +++ b/gcc/testsuite/gcc.dg/plugin/selfassign.c
> @@ -261,7 +261,7 @@ execute_warn_self_assign (void)
>    gimple_stmt_iterator gsi;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>          warn_self_assign (gsi_stmt (gsi));
> diff --git a/gcc/tracer.c b/gcc/tracer.c
> index de6877a..a40cbeb 100644
> --- a/gcc/tracer.c
> +++ b/gcc/tracer.c
> @@ -256,7 +256,7 @@ tail_duplicate (void)
>    branch_ratio_cutoff =
>      (REG_BR_PROB_BASE / 100 * PARAM_VALUE (TRACER_MIN_BRANCH_RATIO));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        int n = count_insns (bb);
>        if (!ignore_bb_p (bb))
> diff --git a/gcc/trans-mem.c b/gcc/trans-mem.c
> index 2a6597d..c9af680 100644
> --- a/gcc/trans-mem.c
> +++ b/gcc/trans-mem.c
> @@ -2656,7 +2656,7 @@ compute_transaction_bits (void)
>       certainly don't need it to calculate CDI_DOMINATOR info.  */
>    gate_tm_init ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bb->flags &= ~BB_IN_TRANSACTION;
>  
>    for (region = all_tm_regions; region; region = region->next)
> diff --git a/gcc/tree-call-cdce.c b/gcc/tree-call-cdce.c
> index 19402e3..32d0d5a 100644
> --- a/gcc/tree-call-cdce.c
> +++ b/gcc/tree-call-cdce.c
> @@ -876,7 +876,7 @@ tree_call_cdce (void)
>    gimple_stmt_iterator i;
>    bool something_changed = false;
>    auto_vec<gimple> cond_dead_built_in_calls;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Collect dead call candidates.  */
>        for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
> diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
> index ec365b5..98434ac 100644
> --- a/gcc/tree-cfg.c
> +++ b/gcc/tree-cfg.c
> @@ -302,7 +302,7 @@ replace_loop_annotate ()
>      }
>  
>    /* Remove IFN_ANNOTATE. Safeguard for the case loop->latch == NULL.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gsi = gsi_last_bb (bb);
>        stmt = gsi_stmt (gsi);
> @@ -456,7 +456,7 @@ factor_computed_gotos (void)
>       Examine the last statement in each basic block to see if the block
>       ends with a computed goto.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi = gsi_last_bb (bb);
>        gimple last;
> @@ -635,7 +635,7 @@ fold_cond_expr_cond (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple stmt = last_stmt (bb);
>  
> @@ -682,7 +682,7 @@ make_edges (void)
>  	     EDGE_FALLTHRU);
>  
>    /* Traverse the basic block array placing edges.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple last = last_stmt (bb);
>        bool fallthru;
> @@ -836,7 +836,7 @@ assign_discriminators (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> @@ -1055,7 +1055,7 @@ make_abnormal_goto_edges (basic_block bb, bool for_call)
>    basic_block target_bb;
>    gimple_stmt_iterator gsi;
>  
> -  FOR_EACH_BB (target_bb)
> +  FOR_EACH_BB_FN (target_bb, cfun)
>      {
>        for (gsi = gsi_start_bb (target_bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -1235,7 +1235,7 @@ cleanup_dead_labels (void)
>  
>    /* Find a suitable label for each block.  We use the first user-defined
>       label if there is one, or otherwise just the first label we see.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -1271,7 +1271,7 @@ cleanup_dead_labels (void)
>  
>    /* Now redirect all jumps/branches to the selected label.
>       First do so for each block ending in a control statement.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple stmt = last_stmt (bb);
>        tree label, new_label;
> @@ -1363,7 +1363,7 @@ cleanup_dead_labels (void)
>    /* Finally, purge dead labels.  All user-defined labels and labels that
>       can be the target of non-local gotos and labels which have their
>       address taken are preserved.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>        tree label_for_this_bb = label_for_bb[bb->index].label;
> @@ -1487,7 +1487,7 @@ group_case_labels (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple stmt = last_stmt (bb);
>        if (stmt && gimple_code (stmt) == GIMPLE_SWITCH)
> @@ -2160,7 +2160,7 @@ dump_cfg_stats (FILE *file)
>  	   SCALE (size), LABEL (size));
>  
>    num_edges = 0;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      num_edges += EDGE_COUNT (bb->succs);
>    size = num_edges * sizeof (struct edge_def);
>    total += size;
> @@ -4894,7 +4894,7 @@ gimple_verify_flow_info (void)
>  	err = 1;
>        }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bool found_ctrl_stmt = false;
>  
> @@ -7241,7 +7241,7 @@ print_loop (FILE *file, struct loop *loop, int indent, int verbosity)
>    if (verbosity >= 1)
>      {
>        fprintf (file, "%s{\n", s_indent);
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	if (bb->loop_father == loop)
>  	  print_loops_bb (file, bb, indent, verbosity);
>  
> @@ -8331,7 +8331,7 @@ execute_fixup_cfg (void)
>    FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (cfun)->succs)
>      e->count = apply_scale (e->count, count_scale);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bb->count = apply_scale (bb->count, count_scale);
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
> index 50b4a68..949b21d 100644
> --- a/gcc/tree-cfgcleanup.c
> +++ b/gcc/tree-cfgcleanup.c
> @@ -640,7 +640,7 @@ cleanup_tree_cfg_1 (void)
>       recording of edge to CASE_LABEL_EXPR.  */
>    start_recording_case_labels ();
>  
> -  /* Start by iterating over all basic blocks.  We cannot use FOR_EACH_BB,
> +  /* Start by iterating over all basic blocks.  We cannot use FOR_EACH_BB_FN,
>       since the basic blocks may get removed.  */
>    n = last_basic_block_for_fn (cfun);
>    for (i = NUM_FIXED_BLOCKS; i < n; i++)
> @@ -918,7 +918,7 @@ merge_phi_nodes (void)
>    calculate_dominance_info (CDI_DOMINATORS);
>  
>    /* Find all PHI nodes that we may be able to merge.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        basic_block dest;
>  
> diff --git a/gcc/tree-complex.c b/gcc/tree-complex.c
> index ff5ccab..8c9a3aa 100644
> --- a/gcc/tree-complex.c
> +++ b/gcc/tree-complex.c
> @@ -207,7 +207,7 @@ init_dont_simulate_again (void)
>    gimple phi;
>    bool saw_a_complex_op = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -1637,7 +1637,7 @@ tree_lower_complex (void)
>  
>    /* ??? Ideally we'd traverse the blocks in breadth-first order.  */
>    old_last_basic_block = last_basic_block_for_fn (cfun);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        if (bb->index >= old_last_basic_block)
>  	continue;
> diff --git a/gcc/tree-dfa.c b/gcc/tree-dfa.c
> index 27d6a71..2d964d5 100644
> --- a/gcc/tree-dfa.c
> +++ b/gcc/tree-dfa.c
> @@ -279,7 +279,7 @@ collect_dfa_stats (struct dfa_stats_d *dfa_stats_p ATTRIBUTE_UNUSED)
>    memset ((void *)dfa_stats_p, 0, sizeof (struct dfa_stats_d));
>  
>    /* Walk all the statements in the function counting references.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator si;
>  
> @@ -741,7 +741,7 @@ dump_enumerated_decls (FILE *file, int flags)
>  
>    memset (&wi, '\0', sizeof (wi));
>    wi.info = (void *) &decl_list;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-eh.c b/gcc/tree-eh.c
> index 85dc79f..467eb20 100644
> --- a/gcc/tree-eh.c
> +++ b/gcc/tree-eh.c
> @@ -3304,7 +3304,7 @@ execute_lower_resx (void)
>  
>    mnt_map = pointer_map_create ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple last = last_stmt (bb);
>        if (last && is_gimple_resx (last))
> @@ -3710,7 +3710,7 @@ execute_lower_eh_dispatch (void)
>  
>    assign_filter_values ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple last = last_stmt (bb);
>        if (last == NULL)
> @@ -3810,7 +3810,7 @@ mark_reachable_handlers (sbitmap *r_reachablep, sbitmap *lp_reachablep)
>    else
>      lp_reachable = NULL;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-emutls.c b/gcc/tree-emutls.c
> index 9ba25fc..32599eb 100644
> --- a/gcc/tree-emutls.c
> +++ b/gcc/tree-emutls.c
> @@ -638,7 +638,7 @@ lower_emutls_function_body (struct cgraph_node *node)
>       create a node for it.  */
>    d.builtin_node = cgraph_get_create_node (d.builtin_decl);
>  
> -  FOR_EACH_BB (d.bb)
> +  FOR_EACH_BB_FN (d.bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        unsigned int i, nedge;
> diff --git a/gcc/tree-if-conv.c b/gcc/tree-if-conv.c
> index 7f6a150..71a25f1 100644
> --- a/gcc/tree-if-conv.c
> +++ b/gcc/tree-if-conv.c
> @@ -1815,7 +1815,7 @@ main_tree_if_conversion (void)
>  #ifdef ENABLE_CHECKING
>    {
>      basic_block bb;
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        gcc_assert (!bb->aux);
>    }
>  #endif
> diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
> index ed06cb9..ab8e40b 100644
> --- a/gcc/tree-inline.c
> +++ b/gcc/tree-inline.c
> @@ -4569,7 +4569,7 @@ optimize_inline_calls (tree fn)
>       will split id->current_basic_block, and the new blocks will
>       follow it; we'll trudge through them, processing their CALL_EXPRs
>       along the way.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      inlined_p |= gimple_expand_calls_inline (bb, &id);
>  
>    pop_gimplify_context (NULL);
> diff --git a/gcc/tree-into-ssa.c b/gcc/tree-into-ssa.c
> index b6d3dd7..8e539f2 100644
> --- a/gcc/tree-into-ssa.c
> +++ b/gcc/tree-into-ssa.c
> @@ -2320,7 +2320,7 @@ rewrite_into_ssa (void)
>  
>    /* Initialize dominance frontier.  */
>    dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
>  
>    /* 1- Compute dominance frontiers.  */
> @@ -2337,7 +2337,7 @@ rewrite_into_ssa (void)
>    rewrite_blocks (ENTRY_BLOCK_PTR_FOR_FN (cfun), REWRITE_ALL);
>  
>    /* Free allocated memory.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_clear (&dfs[bb->index]);
>    free (dfs);
>  
> @@ -3270,7 +3270,7 @@ update_ssa (unsigned update_flags)
>        /* If the caller requested PHI nodes to be added, compute
>  	 dominance frontiers.  */
>        dfs = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	bitmap_initialize (&dfs[bb->index], &bitmap_default_obstack);
>        compute_dominance_frontiers (dfs);
>  
> @@ -3296,7 +3296,7 @@ update_ssa (unsigned update_flags)
>  	insert_updated_phi_nodes_for (sym, dfs, blocks_to_update,
>  	                              update_flags);
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	bitmap_clear (&dfs[bb->index]);
>        free (dfs);
>  
> diff --git a/gcc/tree-nrv.c b/gcc/tree-nrv.c
> index b42993d..e00463d 100644
> --- a/gcc/tree-nrv.c
> +++ b/gcc/tree-nrv.c
> @@ -144,7 +144,7 @@ tree_nrv (void)
>      return 0;
>  
>    /* Look through each block for assignments to the RESULT_DECL.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -238,7 +238,7 @@ tree_nrv (void)
>       RESULT.  */
>    data.var = found;
>    data.result = result;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); )
>  	{
> @@ -358,7 +358,7 @@ execute_return_slot_opt (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> diff --git a/gcc/tree-object-size.c b/gcc/tree-object-size.c
> index 6a587e1..c83345f 100644
> --- a/gcc/tree-object-size.c
> +++ b/gcc/tree-object-size.c
> @@ -1211,7 +1211,7 @@ static unsigned int
>  compute_object_sizes (void)
>  {
>    basic_block bb;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>        for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
> diff --git a/gcc/tree-outof-ssa.c b/gcc/tree-outof-ssa.c
> index 8df3026..c5bba789 100644
> --- a/gcc/tree-outof-ssa.c
> +++ b/gcc/tree-outof-ssa.c
> @@ -835,7 +835,7 @@ eliminate_useless_phis (void)
>    gimple_stmt_iterator gsi;
>    tree result;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); )
>          {
> @@ -893,7 +893,7 @@ rewrite_trees (var_map map ATTRIBUTE_UNUSED)
>    /* Search for PHIs where the destination has no partition, but one
>       or more arguments has a partition.  This should not happen and can
>       create incorrect code.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> @@ -1101,7 +1101,7 @@ insert_backedge_copies (void)
>  
>    mark_dfs_back_edges ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Mark block as possibly needing calculation of UIDs.  */
>        bb->aux = &bb->aux;
> diff --git a/gcc/tree-profile.c b/gcc/tree-profile.c
> index 537c246..51e997c 100644
> --- a/gcc/tree-profile.c
> +++ b/gcc/tree-profile.c
> @@ -637,7 +637,7 @@ tree_profiling (void)
>  
>        push_cfun (DECL_STRUCT_FUNCTION (node->decl));
>  
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  gimple_stmt_iterator gsi;
>  	  for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> diff --git a/gcc/tree-scalar-evolution.c b/gcc/tree-scalar-evolution.c
> index ada942d..59e44cb 100644
> --- a/gcc/tree-scalar-evolution.c
> +++ b/gcc/tree-scalar-evolution.c
> @@ -3276,7 +3276,7 @@ scev_const_prop (void)
>    if (number_of_loops (cfun) <= 1)
>      return 0;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        loop = bb->loop_father;
>  
> diff --git a/gcc/tree-sra.c b/gcc/tree-sra.c
> index 9aa526f..ebd4218 100644
> --- a/gcc/tree-sra.c
> +++ b/gcc/tree-sra.c
> @@ -1252,7 +1252,7 @@ scan_function (void)
>    basic_block bb;
>    bool ret = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> @@ -3311,7 +3311,7 @@ sra_modify_function_body (void)
>    bool cfg_changed = false;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi = gsi_start_bb (bb);
>        while (!gsi_end_p (gsi))
> @@ -3795,7 +3795,7 @@ propagate_dereference_distances (void)
>  
>    auto_vec<basic_block> queue (last_basic_block_for_fn (cfun));
>    queue.quick_push (ENTRY_BLOCK_PTR_FOR_FN (cfun));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        queue.quick_push (bb);
>        bb->aux = bb;
> @@ -4572,7 +4572,7 @@ ipa_sra_modify_function_body (ipa_parm_adjustment_vec adjustments)
>    bool cfg_changed = false;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> @@ -4811,7 +4811,7 @@ convert_callers (struct cgraph_node *node, tree old_decl,
>    if (!encountered_recursive_call)
>      return;
>  
> -  FOR_EACH_BB (this_block)
> +  FOR_EACH_BB_FN (this_block, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-ssa-ccp.c b/gcc/tree-ssa-ccp.c
> index 3d05258..7e07771 100644
> --- a/gcc/tree-ssa-ccp.c
> +++ b/gcc/tree-ssa-ccp.c
> @@ -774,7 +774,7 @@ ccp_initialize (void)
>    const_val = XCNEWVEC (prop_value_t, n_const_val);
>  
>    /* Initialize simulation flags for PHI nodes and statements.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -808,7 +808,7 @@ ccp_initialize (void)
>    /* Now process PHI nodes.  We never clear the simulate_again flag on
>       phi nodes, since we do not know which edges are executable yet,
>       except for phi nodes for virtual operands when we do not do store ccp.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -2508,7 +2508,7 @@ execute_fold_all_builtins (void)
>    basic_block bb;
>    unsigned int todoflags = 0;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>        for (i = gsi_start_bb (bb); !gsi_end_p (i); )
> diff --git a/gcc/tree-ssa-coalesce.c b/gcc/tree-ssa-coalesce.c
> index 70158d5..38a4078 100644
> --- a/gcc/tree-ssa-coalesce.c
> +++ b/gcc/tree-ssa-coalesce.c
> @@ -821,7 +821,7 @@ build_ssa_conflict_graph (tree_live_info_p liveinfo)
>  
>    live = new_live_track (map);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> @@ -929,7 +929,7 @@ create_outofssa_var_map (coalesce_list_p cl, bitmap used_in_copy)
>  
>    map = init_var_map (num_ssa_names);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        tree arg;
>  
> @@ -1183,7 +1183,7 @@ coalesce_partitions (var_map map, ssa_conflicts_p graph, coalesce_list_p cl,
>       in the coalesce list because they do not need to be sorted, and simply
>       consume extra memory/compilation time in large programs.  */
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_EACH_EDGE (e, ei, bb->preds)
>  	if (e->flags & EDGE_ABNORMAL)
> diff --git a/gcc/tree-ssa-copy.c b/gcc/tree-ssa-copy.c
> index 0dd5e14..3da262b 100644
> --- a/gcc/tree-ssa-copy.c
> +++ b/gcc/tree-ssa-copy.c
> @@ -469,7 +469,7 @@ init_copy_prop (void)
>    n_copy_of = num_ssa_names;
>    copy_of = XCNEWVEC (prop_value_t, n_copy_of);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator si;
>        int depth = bb_loop_depth (bb);
> diff --git a/gcc/tree-ssa-copyrename.c b/gcc/tree-ssa-copyrename.c
> index 90e070f..c7d514f 100644
> --- a/gcc/tree-ssa-copyrename.c
> +++ b/gcc/tree-ssa-copyrename.c
> @@ -325,7 +325,7 @@ rename_ssa_copies (void)
>  
>    map = init_var_map (num_ssa_names);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Scan for real copies.  */
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> @@ -341,7 +341,7 @@ rename_ssa_copies (void)
>  	}
>      }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Treat PHI nodes as copies between the result and each argument.  */
>        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c
> index 701dd44..5abef5c 100644
> --- a/gcc/tree-ssa-dce.c
> +++ b/gcc/tree-ssa-dce.c
> @@ -374,7 +374,7 @@ find_obviously_necessary_stmts (bool aggressive)
>    gimple phi, stmt;
>    int flags;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* PHI nodes are never inherently necessary.  */
>        for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> @@ -404,7 +404,7 @@ find_obviously_necessary_stmts (bool aggressive)
>        struct loop *loop;
>        scev_initialize ();
>        if (mark_irreducible_loops ())
> -	FOR_EACH_BB (bb)
> +	FOR_EACH_BB_FN (bb, cfun)
>  	  {
>  	    edge_iterator ei;
>  	    FOR_EACH_EDGE (e, ei, bb->succs)
> @@ -1325,7 +1325,7 @@ eliminate_unnecessary_stmts (void)
>  	    }
>  	}
>      }
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Remove dead PHI nodes.  */
>        something_changed |= remove_dead_phis (bb);
> diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c
> index 6cf60be..2bd2a86 100644
> --- a/gcc/tree-ssa-dom.c
> +++ b/gcc/tree-ssa-dom.c
> @@ -795,7 +795,7 @@ free_all_edge_infos (void)
>    edge_iterator ei;
>    edge e;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_EACH_EDGE (e, ei, bb->preds)
>          {
> @@ -866,7 +866,7 @@ tree_ssa_dominator_optimize (void)
>    {
>      gimple_stmt_iterator gsi;
>      basic_block bb;
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        {
>  	for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	  update_stmt_if_modified (gsi_stmt (gsi));
> diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
> index 6e6d115..a77a639 100644
> --- a/gcc/tree-ssa-forwprop.c
> +++ b/gcc/tree-ssa-forwprop.c
> @@ -3386,7 +3386,7 @@ ssa_forward_propagate_and_combine (void)
>  
>    cfg_changed = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
> index 6ccf2fb..da7198b 100644
> --- a/gcc/tree-ssa-live.c
> +++ b/gcc/tree-ssa-live.c
> @@ -673,7 +673,7 @@ clear_unused_block_pointer (void)
>    basic_block bb;
>    gimple_stmt_iterator gsi;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	unsigned i;
> @@ -791,7 +791,7 @@ remove_unused_locals (void)
>    usedvars = BITMAP_ALLOC (NULL);
>  
>    /* Walk the CFG marking all referenced symbols.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        size_t i;
> @@ -856,7 +856,7 @@ remove_unused_locals (void)
>       ignores them, and the second pass (if there were any) tries to remove
>       them.  */
>    if (have_local_clobbers)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        {
>  	gimple_stmt_iterator gsi;
>  
> @@ -963,11 +963,11 @@ new_tree_live_info (var_map map)
>    live->num_blocks = last_basic_block_for_fn (cfun);
>  
>    live->livein = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_initialize (&live->livein[bb->index], &liveness_bitmap_obstack);
>  
>    live->liveout = XNEWVEC (bitmap_head, last_basic_block_for_fn (cfun));
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_initialize (&live->liveout[bb->index], &liveness_bitmap_obstack);
>  
>    live->work_stack = XNEWVEC (int, last_basic_block_for_fn (cfun));
> @@ -1149,11 +1149,11 @@ calculate_live_on_exit (tree_live_info_p liveinfo)
>    edge_iterator ei;
>  
>    /* live on entry calculations used liveout vectors for defs, clear them.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      bitmap_clear (&liveinfo->liveout[bb->index]);
>  
>    /* Set all the live-on-exit bits for uses in PHIs.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        size_t i;
> @@ -1294,7 +1294,7 @@ dump_live_info (FILE *f, tree_live_info_p live, int flag)
>  
>    if ((flag & LIVEDUMP_ENTRY) && live->livein)
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  fprintf (f, "\nLive on entry to BB%d : ", bb->index);
>  	  EXECUTE_IF_SET_IN_BITMAP (&live->livein[bb->index], 0, i, bi)
> @@ -1308,7 +1308,7 @@ dump_live_info (FILE *f, tree_live_info_p live, int flag)
>  
>    if ((flag & LIVEDUMP_EXIT) && live->liveout)
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	{
>  	  fprintf (f, "\nLive on exit from BB%d : ", bb->index);
>  	  EXECUTE_IF_SET_IN_BITMAP (&live->liveout[bb->index], 0, i, bi)
> diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
> index 3aaf2b2..cbcdc37 100644
> --- a/gcc/tree-ssa-loop-im.c
> +++ b/gcc/tree-ssa-loop-im.c
> @@ -1601,7 +1601,7 @@ analyze_memory_references (void)
>       loops postorder.  */
>    i = 0;
>    bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      if (bb->loop_father != current_loops->tree_root)
>        bbs[i++] = bb;
>    n = i;
> @@ -2406,7 +2406,7 @@ fill_always_executed_in (void)
>    struct loop *loop;
>  
>    bitmap_clear (contains_call);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
> @@ -2478,7 +2478,7 @@ tree_ssa_lim_finalize (void)
>  
>    free_aux_for_edges ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      SET_ALWAYS_EXECUTED_IN (bb, NULL);
>  
>    bitmap_obstack_release (&lim_bitmap_obstack);
> diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
> index 76d5958..ed30c7b0 100644
> --- a/gcc/tree-ssa-loop-manip.c
> +++ b/gcc/tree-ssa-loop-manip.c
> @@ -463,7 +463,7 @@ find_uses_to_rename (bitmap changed_bbs, bitmap *use_blocks, bitmap need_phis)
>      EXECUTE_IF_SET_IN_BITMAP (changed_bbs, 0, index, bi)
>        find_uses_to_rename_bb (BASIC_BLOCK_FOR_FN (cfun, index), use_blocks, need_phis);
>    else
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        find_uses_to_rename_bb (bb, use_blocks, need_phis);
>  }
>  
> @@ -602,7 +602,7 @@ verify_loop_closed_ssa (bool verify_ssa_p)
>  
>    timevar_push (TV_VERIFY_LOOP_CLOSED);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (bsi = gsi_start_phis (bb); !gsi_end_p (bsi); gsi_next (&bsi))
>  	{
> diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
> index f77c016..1c89f45 100644
> --- a/gcc/tree-ssa-math-opts.c
> +++ b/gcc/tree-ssa-math-opts.c
> @@ -527,7 +527,7 @@ execute_cse_reciprocals (void)
>    calculate_dominance_info (CDI_POST_DOMINATORS);
>  
>  #ifdef ENABLE_CHECKING
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      gcc_assert (!bb->aux);
>  #endif
>  
> @@ -540,7 +540,7 @@ execute_cse_reciprocals (void)
>  	  execute_cse_reciprocals_1 (NULL, name);
>        }
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        gimple phi;
> @@ -1419,7 +1419,7 @@ execute_cse_sincos (void)
>    calculate_dominance_info (CDI_DOMINATORS);
>    memset (&sincos_stats, 0, sizeof (sincos_stats));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>        bool cleanup_eh = false;
> @@ -1939,7 +1939,7 @@ execute_optimize_bswap (void)
>  
>    memset (&bswap_stats, 0, sizeof (bswap_stats));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> @@ -2785,7 +2785,7 @@ execute_optimize_widening_mul (void)
>  
>    memset (&widen_mul_stats, 0, sizeof (widen_mul_stats));
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-ssa-propagate.c b/gcc/tree-ssa-propagate.c
> index 55ae68b..f9f084b 100644
> --- a/gcc/tree-ssa-propagate.c
> +++ b/gcc/tree-ssa-propagate.c
> @@ -1097,7 +1097,7 @@ substitute_and_fold (ssa_prop_get_value_fn get_value_fn,
>        }
>  
>    /* Propagate into all uses and fold.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> diff --git a/gcc/tree-ssa-structalias.c b/gcc/tree-ssa-structalias.c
> index 16679f4..9ec1512 100644
> --- a/gcc/tree-ssa-structalias.c
> +++ b/gcc/tree-ssa-structalias.c
> @@ -6778,7 +6778,7 @@ compute_points_to_sets (void)
>    intra_create_variable_infos ();
>  
>    /* Now walk all statements and build the constraint set.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> @@ -6825,7 +6825,7 @@ compute_points_to_sets (void)
>      }
>  
>    /* Compute the call-used/clobbered sets.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi;
>  
> diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
> index a0eac67..4e05246 100644
> --- a/gcc/tree-ssa-tail-merge.c
> +++ b/gcc/tree-ssa-tail-merge.c
> @@ -754,7 +754,7 @@ find_same_succ (void)
>    same_succ same = same_succ_alloc ();
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        find_same_succ_bb (bb, &same);
>        if (same == NULL)
> @@ -1015,7 +1015,7 @@ reset_cluster_vectors (void)
>    for (i = 0; i < all_clusters.length (); ++i)
>      delete_cluster (all_clusters[i]);
>    all_clusters.truncate (0);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      BB_CLUSTER (bb) = NULL;
>  }
>  
> diff --git a/gcc/tree-ssa-ter.c b/gcc/tree-ssa-ter.c
> index fa6a248..22ae47b 100644
> --- a/gcc/tree-ssa-ter.c
> +++ b/gcc/tree-ssa-ter.c
> @@ -683,7 +683,7 @@ find_replaceable_exprs (var_map map)
>  
>    bitmap_obstack_initialize (&ter_bitmap_obstack);
>    table = new_temp_expr_table (map);
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        find_replaceable_in_bb (table, bb);
>        gcc_checking_assert (bitmap_empty_p (table->partition_in_use));
> diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
> index 9289c11..6f978e2 100644
> --- a/gcc/tree-ssa-threadupdate.c
> +++ b/gcc/tree-ssa-threadupdate.c
> @@ -1631,7 +1631,7 @@ thread_through_all_blocks (bool may_peel_loop_headers)
>       ahead and thread it, else ignore it.  */
>    basic_block bb;
>    edge e;
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* If we do end up threading here, we can remove elements from
>  	 BB->preds.  Thus we can not use the FOR_EACH_EDGE iterator.  */
> diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
> index d38e0dd..63a2e10 100644
> --- a/gcc/tree-ssa-uncprop.c
> +++ b/gcc/tree-ssa-uncprop.c
> @@ -65,7 +65,7 @@ associate_equivalences_with_edges (void)
>  
>    /* Walk over each block.  If the block ends with a control statement,
>       then it might create a useful equivalence.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator gsi = gsi_last_bb (bb);
>        gimple stmt;
> @@ -406,7 +406,7 @@ tree_ssa_uncprop (void)
>    /* we just need to empty elements out of the hash table, and cleanup the
>      AUX field on the edges.  */
>    val_ssa_equiv.dispose ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        edge_iterator ei;
> diff --git a/gcc/tree-ssa-uninit.c b/gcc/tree-ssa-uninit.c
> index 4fd5fb8..c6b0a90 100644
> --- a/gcc/tree-ssa-uninit.c
> +++ b/gcc/tree-ssa-uninit.c
> @@ -176,7 +176,7 @@ warn_uninitialized_vars (bool warn_possibly_uninitialized)
>    gimple_stmt_iterator gsi;
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        bool always_executed = dominated_by_p (CDI_POST_DOMINATORS,
>  					     single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun)), bb);
> @@ -2130,7 +2130,7 @@ execute_late_warn_uninitialized (void)
>    added_to_worklist = pointer_set_create ();
>  
>    /* Initialize worklist  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>          gimple phi = gsi_stmt (gsi);
> diff --git a/gcc/tree-ssa.c b/gcc/tree-ssa.c
> index f1025b2..8c1aaf2 100644
> --- a/gcc/tree-ssa.c
> +++ b/gcc/tree-ssa.c
> @@ -999,7 +999,7 @@ verify_ssa (bool check_modified_stmt)
>  
>    /* Now verify all the uses and make sure they agree with the definitions
>       found in the previous pass.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge e;
>        gimple phi;
> @@ -1456,7 +1456,7 @@ execute_update_addresses_taken (void)
>  
>    /* Collect into ADDRESSES_TAKEN all variables whose address is taken within
>       the function body.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -1558,7 +1558,7 @@ execute_update_addresses_taken (void)
>       variables and operands need to be rewritten to expose bare symbols.  */
>    if (!bitmap_empty_p (suitable_for_renaming))
>      {
> -      FOR_EACH_BB (bb)
> +      FOR_EACH_BB_FN (bb, cfun)
>  	for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);)
>  	  {
>  	    gimple stmt = gsi_stmt (gsi);
> diff --git a/gcc/tree-stdarg.c b/gcc/tree-stdarg.c
> index 8b168e0..dc82340 100644
> --- a/gcc/tree-stdarg.c
> +++ b/gcc/tree-stdarg.c
> @@ -536,7 +536,7 @@ check_all_va_list_escapes (struct stdarg_info *si)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -703,7 +703,7 @@ execute_optimize_stdarg (void)
>  			   || TREE_TYPE (cfun_va_list) == char_type_node);
>    gcc_assert (is_gimple_reg_type (cfun_va_list) == va_list_simple_ptr);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -813,7 +813,7 @@ execute_optimize_stdarg (void)
>    memset (&wi, 0, sizeof (wi));
>    wi.info = si.va_list_vars;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> diff --git a/gcc/tree-switch-conversion.c b/gcc/tree-switch-conversion.c
> index f6b17b8..efcc94d 100644
> --- a/gcc/tree-switch-conversion.c
> +++ b/gcc/tree-switch-conversion.c
> @@ -1420,7 +1420,7 @@ do_switchconv (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>    {
>      const char *failure_reason;
>      gimple stmt = last_stmt (bb);
> diff --git a/gcc/tree-vect-generic.c b/gcc/tree-vect-generic.c
> index d55485d..098012c 100644
> --- a/gcc/tree-vect-generic.c
> +++ b/gcc/tree-vect-generic.c
> @@ -1541,7 +1541,7 @@ expand_vector_operations (void)
>    basic_block bb;
>    bool cfg_changed = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
> index c11f8a8..e5d201f 100644
> --- a/gcc/tree-vectorizer.c
> +++ b/gcc/tree-vectorizer.c
> @@ -157,7 +157,7 @@ adjust_simduid_builtins (hash_table <simduid_to_vf> &htab)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator i;
>  
> @@ -265,7 +265,7 @@ note_simd_array_uses (hash_table <simd_array_to_simduid> *htab)
>    wi.info = &ns;
>    ns.htab = htab;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	gimple stmt = gsi_stmt (gsi);
> @@ -475,7 +475,7 @@ execute_vect_slp (void)
>  
>    init_stmt_vec_info_vec ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        vect_location = find_bb_location (bb);
>  
> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> index 06b6259..8ab6d76 100644
> --- a/gcc/tree-vrp.c
> +++ b/gcc/tree-vrp.c
> @@ -6431,7 +6431,7 @@ check_all_array_refs (void)
>    basic_block bb;
>    gimple_stmt_iterator si;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        edge_iterator ei;
>        edge e;
> @@ -6593,7 +6593,7 @@ remove_range_assertions (void)
>    /* Note that the BSI iterator bump happens at the bottom of the
>       loop and no bump is necessary if we're removing the statement
>       referenced by the current BSI.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (si = gsi_after_labels (bb), is_unreachable = -1; !gsi_end_p (si);)
>        {
>  	gimple stmt = gsi_stmt (si);
> @@ -6708,7 +6708,7 @@ vrp_initialize (void)
>    vr_value = XCNEWVEC (value_range_t *, num_vr_values);
>    vr_phi_edge_counts = XCNEWVEC (int, num_ssa_names);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple_stmt_iterator si;
>  
> @@ -9543,7 +9543,7 @@ identify_jump_threads (void)
>       I doubt it's worth the effort for the classes of jump
>       threading opportunities we are trying to identify at this
>       point in compilation.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        gimple last;
>  
> diff --git a/gcc/tsan.c b/gcc/tsan.c
> index 4efcfe5..d12459f 100644
> --- a/gcc/tsan.c
> +++ b/gcc/tsan.c
> @@ -640,7 +640,7 @@ instrument_memory_accesses (void)
>    gimple_stmt_iterator gsi;
>    bool fentry_exit_instrument = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        fentry_exit_instrument |= instrument_gimple (&gsi);
>    return fentry_exit_instrument;
> diff --git a/gcc/ubsan.c b/gcc/ubsan.c
> index 846e884..51b4f8d 100644
> --- a/gcc/ubsan.c
> +++ b/gcc/ubsan.c
> @@ -741,7 +741,7 @@ ubsan_pass (void)
>    basic_block bb;
>    gimple_stmt_iterator gsi;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi);)
>  	{
> diff --git a/gcc/value-prof.c b/gcc/value-prof.c
> index d509354..c684835 100644
> --- a/gcc/value-prof.c
> +++ b/gcc/value-prof.c
> @@ -542,7 +542,7 @@ verify_histograms (void)
>  
>    error_found = false;
>    visited_hists = pointer_set_create ();
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        {
>  	gimple stmt = gsi_stmt (gsi);
> @@ -648,7 +648,7 @@ gimple_value_profile_transformations (void)
>    gimple_stmt_iterator gsi;
>    bool changed = false;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>  	{
> @@ -1944,7 +1944,7 @@ gimple_find_values_to_profile (histogram_values *values)
>    histogram_value hist = NULL;
>    values->create (0);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>        gimple_values_to_profile (gsi_stmt (gsi), values);
>  
> diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
> index 5bd0799..175ec01 100644
> --- a/gcc/var-tracking.c
> +++ b/gcc/var-tracking.c
> @@ -6941,7 +6941,7 @@ vt_find_locations (void)
>    in_pending = sbitmap_alloc (last_basic_block_for_fn (cfun));
>    bitmap_clear (in_worklist);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      fibheap_insert (pending, bb_order[bb->index], bb);
>    bitmap_ones (in_pending);
>  
> @@ -7101,7 +7101,7 @@ vt_find_locations (void)
>      }
>  
>    if (success && MAY_HAVE_DEBUG_INSNS)
> -    FOR_EACH_BB (bb)
> +    FOR_EACH_BB_FN (bb, cfun)
>        gcc_assert (VTI (bb)->flooded);
>  
>    free (bb_order);
> @@ -7229,7 +7229,7 @@ dump_dataflow_sets (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        fprintf (dump_file, "\nBasic block %d:\n", bb->index);
>        fprintf (dump_file, "IN:\n");
> @@ -9402,7 +9402,7 @@ vt_emit_notes (void)
>  
>    /* Free memory occupied by the out hash tables, as they aren't used
>       anymore.  */
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      dataflow_set_clear (&VTI (bb)->out);
>  
>    /* Enable emitting notes by functions (mainly by set_variable_part and
> @@ -9418,7 +9418,7 @@ vt_emit_notes (void)
>  
>    dataflow_set_init (&cur);
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        /* Emit the notes for changes of variable locations between two
>  	 subsequent basic blocks.  */
> @@ -9995,7 +9995,7 @@ vt_initialize (void)
>  
>    vt_add_function_parameters ();
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        rtx insn;
>        HOST_WIDE_INT pre, post = 0;
> @@ -10138,7 +10138,7 @@ delete_debug_insns (void)
>    if (!MAY_HAVE_DEBUG_INSNS)
>      return;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        FOR_BB_INSNS_SAFE (bb, insn, next)
>  	if (DEBUG_INSN_P (insn))
> @@ -10181,7 +10181,7 @@ vt_finalize (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB (bb)
> +  FOR_EACH_BB_FN (bb, cfun)
>      {
>        VTI (bb)->mos.release ();
>      }


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro.
  2013-12-06 14:53                     ` [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro David Malcolm
@ 2013-12-07  7:14                       ` Oleg Endo
  0 siblings, 0 replies; 42+ messages in thread
From: Oleg Endo @ 2013-12-07  7:14 UTC (permalink / raw)
  To: David Malcolm; +Cc: Richard Biener, gcc-patches

David,

Could you please also update the use of FOR_EACH_BB_REVERSE in
config/sh/sh_optimize_sett_clrt.cc ?

Thanks,
Oleg

On Fri, 2013-12-06 at 09:51 -0500, David Malcolm wrote:
> gcc/
> 	* basic-block.h (FOR_EACH_BB_REVERSE): Eliminate macro.
> 
> 	* cfghooks.c (verify_flow_info): Replace uses of FOR_EACH_BB_REVERSE
> 	with FOR_EACH_BB_REVERSE_FN, making uses of cfun explicit.
> 	* cfgrtl.c (print_rtl_with_bb, rtl_verify_edges,
> 	rtl_verify_bb_insns, rtl_verify_bb_pointers,
> 	rtl_verify_bb_insn_chain, rtl_verify_fallthru): Likewise.
> 	* config/ia64/ia64.c (emit_predicate_relation_info): Likewise.
> 	* config/sh/sh.c (sh_md_init_global): Likewise.
> 	* dce.c (reset_unmarked_insns_debug_uses, delete_unmarked_insns):
> 	Likewise.
> 	* dominance.c (calc_dfs_tree): Likewise.
> 	* final.c (final): Likewise.
> 	* function.c (thread_prologue_and_epilogue_insns): Likewise.
> 	* gcse.c (compute_code_hoist_vbeinout): Likewise.
> 	* ira.c (update_equiv_regs, build_insn_chain): Likewise.
> 	* lcm.c (compute_antinout_edge): Likewise.
> 	* mode-switching.c (optimize_mode_switching): Likewise.
> 	* postreload.c (reload_combine): Likewise.
> 	* recog.c (split_all_insns, peephole2_optimize): Likewise.
> 	* tree-ssa-live.c (live_worklist): Likewise.
> ---
>  gcc/basic-block.h      |  2 --
>  gcc/cfghooks.c         |  2 +-
>  gcc/cfgrtl.c           | 12 ++++++------
>  gcc/config/ia64/ia64.c |  4 ++--
>  gcc/config/sh/sh.c     |  2 +-
>  gcc/dce.c              |  4 ++--
>  gcc/dominance.c        |  4 ++--
>  gcc/final.c            |  2 +-
>  gcc/function.c         |  2 +-
>  gcc/gcse.c             |  2 +-
>  gcc/ira.c              |  4 ++--
>  gcc/lcm.c              |  2 +-
>  gcc/mode-switching.c   |  4 ++--
>  gcc/postreload.c       |  2 +-
>  gcc/recog.c            |  4 ++--
>  gcc/tree-ssa-live.c    |  2 +-
>  16 files changed, 26 insertions(+), 28 deletions(-)
> 
> diff --git a/gcc/basic-block.h b/gcc/basic-block.h
> index b378a5b..75f16ac 100644
> --- a/gcc/basic-block.h
> +++ b/gcc/basic-block.h
> @@ -336,8 +336,6 @@ struct GTY(()) control_flow_graph {
>  #define FOR_EACH_BB_REVERSE_FN(BB, FN) \
>    FOR_BB_BETWEEN (BB, (FN)->cfg->x_exit_block_ptr->prev_bb, (FN)->cfg->x_entry_block_ptr, prev_bb)
>  
> -#define FOR_EACH_BB_REVERSE(BB) FOR_EACH_BB_REVERSE_FN (BB, cfun)
> -
>  /* For iterating over insns in basic block.  */
>  #define FOR_BB_INSNS(BB, INSN)			\
>    for ((INSN) = BB_HEAD (BB);			\
> diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
> index 2400965..78218b5 100644
> --- a/gcc/cfghooks.c
> +++ b/gcc/cfghooks.c
> @@ -123,7 +123,7 @@ verify_flow_info (void)
>      }
>  
>    /* Now check the basic blocks (boundaries etc.) */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        int n_fallthru = 0;
>        edge e;
> diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
> index daadd9b..7734ac1 100644
> --- a/gcc/cfgrtl.c
> +++ b/gcc/cfgrtl.c
> @@ -2153,7 +2153,7 @@ print_rtl_with_bb (FILE *outf, const_rtx rtx_first, int flags)
>  
>        if (flags & TDF_BLOCKS)
>  	{
> -	  FOR_EACH_BB_REVERSE (bb)
> +	  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	    {
>  	      rtx x;
>  
> @@ -2408,7 +2408,7 @@ rtl_verify_edges (void)
>    int err = 0;
>    basic_block bb;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        int n_fallthru = 0, n_branch = 0, n_abnormal_call = 0, n_sibcall = 0;
>        int n_eh = 0, n_abnormal = 0;
> @@ -2586,7 +2586,7 @@ rtl_verify_bb_insns (void)
>    int err = 0;
>    basic_block bb;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        /* Now check the header of basic
>  	 block.  It ought to contain optional CODE_LABEL followed
> @@ -2649,7 +2649,7 @@ rtl_verify_bb_pointers (void)
>    basic_block bb;
>  
>    /* Check the general integrity of the basic blocks.  */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        rtx insn;
>  
> @@ -2739,7 +2739,7 @@ rtl_verify_bb_insn_chain (void)
>  
>    bb_info = XCNEWVEC (basic_block, max_uid);
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        rtx head = BB_HEAD (bb);
>        rtx end = BB_END (bb);
> @@ -2821,7 +2821,7 @@ rtl_verify_fallthru (void)
>    basic_block bb;
>    int err = 0;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        edge e;
>  
> diff --git a/gcc/config/ia64/ia64.c b/gcc/config/ia64/ia64.c
> index a837974..99bc094 100644
> --- a/gcc/config/ia64/ia64.c
> +++ b/gcc/config/ia64/ia64.c
> @@ -9613,7 +9613,7 @@ emit_predicate_relation_info (void)
>  {
>    basic_block bb;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        int r;
>        rtx head = BB_HEAD (bb);
> @@ -9641,7 +9641,7 @@ emit_predicate_relation_info (void)
>       relations around them.  Otherwise the assembler will assume the call
>       returns, and complain about uses of call-clobbered predicates after
>       the call.  */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        rtx insn = BB_HEAD (bb);
>  
> diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
> index 3e907b2..26c8957 100644
> --- a/gcc/config/sh/sh.c
> +++ b/gcc/config/sh/sh.c
> @@ -11110,7 +11110,7 @@ sh_md_init_global (FILE *dump ATTRIBUTE_UNUSED,
>    regmode_weight[1] = (short *) xcalloc (old_max_uid, sizeof (short));
>    r0_life_regions = 0;
>  
> -  FOR_EACH_BB_REVERSE (b)
> +  FOR_EACH_BB_REVERSE_FN (b, cfun)
>    {
>      find_regmode_weight (b, SImode);
>      find_regmode_weight (b, SFmode);
> diff --git a/gcc/dce.c b/gcc/dce.c
> index 3101102..843dfc6 100644
> --- a/gcc/dce.c
> +++ b/gcc/dce.c
> @@ -511,7 +511,7 @@ reset_unmarked_insns_debug_uses (void)
>    basic_block bb;
>    rtx insn, next;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      FOR_BB_INSNS_REVERSE_SAFE (bb, insn, next)
>        if (DEBUG_INSN_P (insn))
>  	{
> @@ -550,7 +550,7 @@ delete_unmarked_insns (void)
>    rtx insn, next;
>    bool must_clean = false;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      FOR_BB_INSNS_REVERSE_SAFE (bb, insn, next)
>        if (NONDEBUG_INSN_P (insn))
>  	{
> diff --git a/gcc/dominance.c b/gcc/dominance.c
> index 521b224..69816c1 100644
> --- a/gcc/dominance.c
> +++ b/gcc/dominance.c
> @@ -357,7 +357,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
>        basic_block b;
>        bool saw_unconnected = false;
>  
> -      FOR_EACH_BB_REVERSE (b)
> +      FOR_EACH_BB_REVERSE_FN (b, cfun)
>  	{
>  	  if (EDGE_COUNT (b->succs) > 0)
>  	    {
> @@ -376,7 +376,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
>  
>        if (saw_unconnected)
>  	{
> -	  FOR_EACH_BB_REVERSE (b)
> +	  FOR_EACH_BB_REVERSE_FN (b, cfun)
>  	    {
>  	      basic_block b2;
>  	      if (di->dfs_order[b->index])
> diff --git a/gcc/final.c b/gcc/final.c
> index f475d27..5526974 100644
> --- a/gcc/final.c
> +++ b/gcc/final.c
> @@ -1996,7 +1996,7 @@ final (rtx first, FILE *file, int optimize_p)
>  
>        /* There is no cfg for a thunk.  */
>        if (!cfun->is_thunk)
> -	FOR_EACH_BB_REVERSE (bb)
> +	FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	  {
>  	    start_to_bb[INSN_UID (BB_HEAD (bb))] = bb;
>  	    end_to_bb[INSN_UID (BB_END (bb))] = bb;
> diff --git a/gcc/function.c b/gcc/function.c
> index e00f583..e2d0e23 100644
> --- a/gcc/function.c
> +++ b/gcc/function.c
> @@ -6236,7 +6236,7 @@ thread_prologue_and_epilogue_insns (void)
>  	    }
>  	  /* Now duplicate the tails.  */
>  	  if (!bitmap_empty_p (&bb_tail))
> -	    FOR_EACH_BB_REVERSE (bb)
> +	    FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	      {
>  		basic_block copy_bb, tbb;
>  		rtx insert_point;
> diff --git a/gcc/gcse.c b/gcc/gcse.c
> index a6874ab..fdf0a57 100644
> --- a/gcc/gcse.c
> +++ b/gcc/gcse.c
> @@ -2829,7 +2829,7 @@ compute_code_hoist_vbeinout (void)
>  
>        /* We scan the blocks in the reverse order to speed up
>  	 the convergence.  */
> -      FOR_EACH_BB_REVERSE (bb)
> +      FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	{
>  	  if (bb->next_bb != EXIT_BLOCK_PTR_FOR_FN (cfun))
>  	    {
> diff --git a/gcc/ira.c b/gcc/ira.c
> index b4ae0ca..7403870 100644
> --- a/gcc/ira.c
> +++ b/gcc/ira.c
> @@ -3772,7 +3772,7 @@ update_equiv_regs (void)
>       within the same loop (or in an inner loop), then move the register
>       initialization just before the use, so that they are in the same
>       basic block.  */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        loop_depth = bb_loop_depth (bb);
>        for (insn = BB_END (bb);
> @@ -4127,7 +4127,7 @@ build_insn_chain (void)
>    for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
>      if (TEST_HARD_REG_BIT (eliminable_regset, i))
>        bitmap_set_bit (elim_regset, i);
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        bitmap_iterator bi;
>        rtx insn;
> diff --git a/gcc/lcm.c b/gcc/lcm.c
> index 0b528d9..b5d56e0 100644
> --- a/gcc/lcm.c
> +++ b/gcc/lcm.c
> @@ -109,7 +109,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
>  
>    /* Put every block on the worklist; this is necessary because of the
>       optimistic initialization of ANTIN above.  */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        *qin++ = bb;
>        bb->aux = bb;
> diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
> index 4e31d68..4f68536 100644
> --- a/gcc/mode-switching.c
> +++ b/gcc/mode-switching.c
> @@ -692,7 +692,7 @@ optimize_mode_switching (void)
>  	      insert_insn_on_edge (mode_set, eg);
>  	    }
>  
> -	  FOR_EACH_BB_REVERSE (bb)
> +	  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	    if (bitmap_bit_p (del[bb->index], j))
>  	      {
>  		make_preds_opaque (bb, j);
> @@ -712,7 +712,7 @@ optimize_mode_switching (void)
>      {
>        int no_mode = num_modes[entity_map[j]];
>  
> -      FOR_EACH_BB_REVERSE (bb)
> +      FOR_EACH_BB_REVERSE_FN (bb, cfun)
>  	{
>  	  struct seginfo *ptr, *next;
>  	  for (ptr = bb_info[j][bb->index].seginfo; ptr; ptr = next)
> diff --git a/gcc/postreload.c b/gcc/postreload.c
> index bfa5a38..37bd9ff 100644
> --- a/gcc/postreload.c
> +++ b/gcc/postreload.c
> @@ -1281,7 +1281,7 @@ reload_combine (void)
>    label_live = XNEWVEC (HARD_REG_SET, n_labels);
>    CLEAR_HARD_REG_SET (ever_live_at_start);
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        insn = BB_HEAD (bb);
>        if (LABEL_P (insn))
> diff --git a/gcc/recog.c b/gcc/recog.c
> index c59aa0e..dbd9a8a 100644
> --- a/gcc/recog.c
> +++ b/gcc/recog.c
> @@ -2902,7 +2902,7 @@ split_all_insns (void)
>    bitmap_clear (blocks);
>    changed = false;
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        rtx insn, next;
>        bool finish = false;
> @@ -3556,7 +3556,7 @@ peephole2_optimize (void)
>    search_ofs = 0;
>    live = BITMAP_ALLOC (&reg_obstack);
>  
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      {
>        bool past_end = false;
>        int pos;
> diff --git a/gcc/tree-ssa-live.c b/gcc/tree-ssa-live.c
> index da7198b..a37ef85 100644
> --- a/gcc/tree-ssa-live.c
> +++ b/gcc/tree-ssa-live.c
> @@ -1050,7 +1050,7 @@ live_worklist (tree_live_info_p live)
>  
>    /* Visit all the blocks in reverse order and propagate live on entry values
>       into the predecessors blocks.  */
> -  FOR_EACH_BB_REVERSE (bb)
> +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
>      loe_visit_block (live, bb, visited, tmp);
>  
>    /* Process any blocks which require further iteration.  */


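The `FOR_EACH_BB_REVERSE_FN` walk used throughout the patch above can be modeled with simplified stand-in types like this; the structs below are illustrative stand-ins, not GCC's real declarations, but the macro body mirrors the `basic-block.h` definition:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in types modeling GCC's CFG layout; not the real
// declarations, just enough to show how the iteration macro walks.
struct basic_block_def { int index; basic_block_def *prev_bb; };
typedef basic_block_def *basic_block;

struct control_flow_graph
{
  basic_block x_entry_block_ptr;
  basic_block x_exit_block_ptr;
};
struct function { control_flow_graph *cfg; };

// Walk every real block from last to first; the function whose CFG is
// traversed is an explicit argument, not the hidden global cfun.
#define FOR_EACH_BB_REVERSE_FN(BB, FN) \
  for ((BB) = (FN)->cfg->x_exit_block_ptr->prev_bb; \
       (BB) != (FN)->cfg->x_entry_block_ptr; \
       (BB) = (BB)->prev_bb)

std::vector<int>
collect_reverse (function *fn)
{
  std::vector<int> order;
  basic_block bb;
  FOR_EACH_BB_REVERSE_FN (bb, fn)
    order.push_back (bb->index);
  return order;
}
```

Passing `cfun` as the second argument reproduces the old `FOR_EACH_BB_REVERSE (bb)` behavior exactly, which is why the conversion is mechanical.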

* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-06 20:25                           ` Richard Biener
@ 2013-12-09 21:48                             ` David Malcolm
  2013-12-09 21:53                               ` Oleg Endo
  2013-12-10  9:43                               ` Richard Biener
  0 siblings, 2 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-09 21:48 UTC (permalink / raw)
  To: Richard Biener; +Cc: Oleg Endo, Steven Bosscher, gcc-patches

On Fri, 2013-12-06 at 21:27 +0100, Richard Biener wrote:
> Oleg Endo <oleg.endo@t-online.de> wrote:
> >On Fri, 2013-12-06 at 16:57 +0100, Steven Bosscher wrote:
> >> On Fri, Dec 6, 2013 at 3:51 PM, David Malcolm wrote:
> >> >         * asan.c (transform_statements): Eliminate use of
> >last_basic_block
> >> >         in favor of last_basic_block_for_fn, in order to make use
> >of cfun
> >> >         explicit.
> >> 
> >> Can we please make all this _for_fn go away?
> >> 
> >
> >Sorry if this has been discussed before... but why not add member
> >functions to 'function' instead of freestanding macros/functions that
> >take a function* as a first argument?  This would also make it easier
> >to
> >eliminate the "_for_fn" (freestanding function/macro name clashes etc)
> >I
> >think.
> 
> Both can be done, but these patches make cfun uses explicit which was the goal while following existing practice.

Yes, longer-term I'd prefer member functions.  The approach I posted
gives identical results to the status quo after a trip through
the preprocessor, so is somewhat lower-risk than introducing inlinable
member functions. (and in any case, all of the repeated implicit
dereferencing of "cfun->" seems inefficient to me, but not something I
plan to touch in stage3)
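As a rough sketch of that longer-term member-function direction (the field and method names here are purely illustrative, not GCC's actual API):

```cpp
#include <cassert>

// Illustrative sketch only: what a member-function accessor on struct
// function might look like, replacing a free _for_fn macro.  Names are
// hypothetical stand-ins, not the real GCC declarations.
struct function
{
  int x_last_basic_block;

  // Callers would write fn->last_basic_block () instead of
  // last_basic_block_for_fn (fn); no hidden cfun is involved either way.
  int last_basic_block () const { return x_last_basic_block; }
};
```

Either spelling keeps the function explicit at the call site; the member form additionally avoids polluting the global namespace with one free macro per accessor.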

I've gone ahead and committed the patch series to trunk, test-building
before each commit, and fixing up patches 11 and 12 for the issues noted
by Oleg (the config/sh files had .cc suffixes, and hence didn't show up
in my grepping; I updated my grep accordingly).

There are still 4 macros in function.h that implicitly use cfun, which
it's less clear to me how to remove:
        #define current_function_funcdef_no
        #define current_loops
        #define dom_computed
        #define n_bbs_in_dom_tree

plus various other cfun-using macros elsewhere in headers...
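For concreteness, a rough sketch (not a patch) of how one of these remaining macros could follow the same convention; the old macro body matches function.h of this era, while the `funcdef_no_for_fn` name is made up for illustration:

```cpp
#include <cassert>

// Sketch of applying the _for_fn convention to one of the macros listed
// above.  The explicit replacement name is hypothetical.
struct function { int funcdef_no; };
static function *cfun;

/* Old form: silently dereferences the global cfun.  */
#define current_function_funcdef_no (cfun->funcdef_no)

/* Explicit form, following the existing _for_fn naming convention.  */
#define funcdef_no_for_fn(FN) ((FN)->funcdef_no)
```

With `cfun` passed explicitly the two forms are interchangeable, so the conversion would be as mechanical as the basic-block.h one.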

FWIW, here are the svn revisions of what I committed, vs the numbering
of the patches in the emails:
  0001: r205816
  0002: r205817
  0003: r205818
  0004: r205820
  0005: r205821
  0006: r205822
  0007: r205823
  0008: r205824
  0009: r205825
  0010: r205826
  0011: r205828
  0012: r205829
  0013: r205830

Hope this is all sane
Dave



* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-09 21:48                             ` David Malcolm
@ 2013-12-09 21:53                               ` Oleg Endo
  2013-12-10  9:43                               ` Richard Biener
  1 sibling, 0 replies; 42+ messages in thread
From: Oleg Endo @ 2013-12-09 21:53 UTC (permalink / raw)
  To: David Malcolm; +Cc: Richard Biener, Steven Bosscher, gcc-patches

On Mon, 2013-12-09 at 16:47 -0500, David Malcolm wrote:
> Yes, longer-term I'd prefer member functions.  The approach I posted
> approach gives identical results to the status quo after a trip through
> the preprocessor, so is somewhat lower-risk than introducing inlinable
> member functions. (and in any case, all of the repeated implicit
> dereferencing of "cfun->" seems inefficient to me, but not something I
> plan to touch in stage3)

Understandable.

> I've gone ahead and committed the patch series to trunk, test-building
> before each commit, and fixing up patches 11 and 12 for the issues noted
> by Oleg (the config/sh files had .cc suffixes, and hence didn't show up
> in my grepping; I updated my grep accordingly).

Thanks!

Cheers,
Oleg


* Re: [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h
  2013-12-06 15:39                     ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h Richard Biener
@ 2013-12-09 22:07                       ` David Malcolm
  0 siblings, 0 replies; 42+ messages in thread
From: David Malcolm @ 2013-12-09 22:07 UTC (permalink / raw)
  To: Richard Biener; +Cc: Richard Biener, gcc-patches

On Fri, 2013-12-06 at 16:41 +0100, Richard Biener wrote:
> David Malcolm <dmalcolm@redhat.com> wrote:
> >I have a series of 13 follow-up patches which remove the remaining
> >"cfun"-using macros from basic-block.h
> >
> >Successfully bootstrapped&regtested on x86_64-unknown-linux-gnu.
> >
> >These were pre-approved in stage1, and are mechanical in nature [1]
> >
> >I'd like to apply these to trunk now, but given that we're now in
> >stage3, do I need to wait until the next stage1?
> 
> No, its ok now.
Thanks; as noted elsewhere, I've committed these now.

[...]

> After the patches the macros should be removed so that no new uses appear.

Done.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 10/13] Eliminate last_basic_block macro.
  2013-12-09 21:48                             ` David Malcolm
  2013-12-09 21:53                               ` Oleg Endo
@ 2013-12-10  9:43                               ` Richard Biener
  1 sibling, 0 replies; 42+ messages in thread
From: Richard Biener @ 2013-12-10  9:43 UTC (permalink / raw)
  To: David Malcolm; +Cc: Oleg Endo, Steven Bosscher, gcc-patches

On Mon, 9 Dec 2013, David Malcolm wrote:

> On Fri, 2013-12-06 at 21:27 +0100, Richard Biener wrote:
> > Oleg Endo <oleg.endo@t-online.de> wrote:
> > >On Fri, 2013-12-06 at 16:57 +0100, Steven Bosscher wrote:
> > >> On Fri, Dec 6, 2013 at 3:51 PM, David Malcolm wrote:
> > >> >         * asan.c (transform_statements): Eliminate use of
> > >last_basic_block
> > >> >         in favor of last_basic_block_for_fn, in order to make use
> > >of cfun
> > >> >         explicit.
> > >> 
> > >> Can we please make all this _for_fn go away?
> > >> 
> > >
> > >Sorry if this has been discussed before... but why not add member
> > >functions to 'function' instead of freestanding macros/functions that
> > >take a function* as a first argument?  This would also make it easier
> > >to eliminate the "_for_fn" (freestanding function/macro name clashes
> > >etc.), I think.
> > 
> > Both can be done, but these patches make cfun uses explicit which was the goal while following existing practice.
> 
> Yes, longer-term I'd prefer member functions.  The approach I posted
> gives identical results to the status quo after a trip through
> the preprocessor, so is somewhat lower-risk than introducing inlinable
> member functions. (and in any case, all of the repeated implicit
> dereferencing of "cfun->" seems inefficient to me, but not something I
> plan to touch in stage3)
> 
> I've gone ahead and committed the patch series to trunk, test-building
> before each commit, and fixing up patches 11 and 12 for the issues noted
> by Oleg (the config/sh files had .cc suffixes, and hence didn't show up
> in my grepping; I updated my grep accordingly).
> 
> There are still 4 macros in function.h that implicitly use cfun, which
> it's less clear to me how to remove:
>         #define current_function_funcdef_no

funcdef_no_for_fn (cfun)

>         #define current_loops

loops_for_fn (cfun)

>         #define dom_computed

less obvious - we have DOM info computed only for a single function
throughout the compilation (so rooting DOM info from struct function
is somewhat odd).  I wouldn't touch it unless the DOM API gets
a _fn API variant (if that is desired at all).

>         #define n_bbs_in_dom_tree

Likewise.

As for using more C++, I was thinking about providing context by
means of adding accessors to gimple_opt_pass that automagically
provide the 'cfun' argument.  That of course means making
passes really classes derived from gimple_opt_pass.  The idea
is that from being a gimple_opt_pass you know you are working
with a single function (and the pass instance can have a pointer
to it, to get rid of 'cfun') and that there should be a convenient
API to use from such context where the function you work with
is implicit.

Of course that would overload gimple_opt_pass with various API
wrappers (or we'd use multiple inheritance and API objects).

Richard.

> plus various other cfun-using macros elsewhere in headers...
> 
> FWIW, here are the svn revisions of what I committed, vs the numbering
> of the patches in the emails:
>   0001: r205816
>   0002: r205817
>   0003: r205818
>   0004: r205820
>   0005: r205821
>   0006: r205822
>   0007: r205823
>   0008: r205824
>   0009: r205825
>   0010: r205826
>   0011: r205828
>   0012: r205829
>   0013: r205830
> 
> Hope this is all sane
> Dave

-- 
Richard Biener <rguenther@suse.de>
SUSE / SUSE Labs
SUSE LINUX Products GmbH - Nuernberg - AG Nuernberg - HRB 16746
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2013-12-10  9:43 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-11-13 10:52 [PATCH] Avoid some unnecessary set_cfun calls Jakub Jelinek
2013-11-13 11:17 ` Richard Biener
2013-11-13 11:27   ` Jakub Jelinek
2013-11-13 11:38     ` Richard Biener
2013-11-13 11:45       ` Jakub Jelinek
2013-11-13 11:51         ` Richard Biener
2013-11-16 12:58     ` Richard Sandiford
2013-11-13 14:13 ` Martin Jambor
2013-11-13 14:20   ` Richard Biener
2013-11-13 14:40     ` Martin Jambor
2013-11-13 14:46     ` David Malcolm
2013-11-13 15:22       ` Richard Biener
2013-11-16 10:49         ` [PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls) David Malcolm
2013-11-19  5:27           ` David Malcolm
2013-11-19  9:19             ` Richard Biener
2013-11-19 17:29               ` Committed: removal of n_edges macro David Malcolm
2013-11-19 17:33                 ` Richard Biener
2013-11-20  1:12                 ` Committed: removal of ENTRY_BLOCK_PTR and EXIT_BLOCK_PTR macros David Malcolm
2013-12-06 14:52                   ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h David Malcolm
2013-12-06 14:52                     ` [PATCH 01/13] Rename macros (basic_block_info_for_function, BASIC_BLOCK_FOR_FUNCTION, SET_BASIC_BLOCK_FOR_FUNCTION) David Malcolm
2013-12-06 14:53                     ` [PATCH 04/13] Rename profile_status_for_function to profile_status_for_fn David Malcolm
2013-12-06 14:53                     ` [PATCH 03/13] Rename label_to_block_map_for_function to label_to_block_map_for_fn David Malcolm
2013-12-06 14:53                     ` [PATCH 07/13] Eliminate basic_block_info macro David Malcolm
2013-12-06 14:53                     ` [PATCH 02/13] Rename last_basic_block_for_function to last_basic_block_for_fn David Malcolm
2013-12-06 14:53                     ` [PATCH 05/13] Eliminate SET_BASIC_BLOCK macro David Malcolm
2013-12-06 14:53                     ` [PATCH 12/13] Eliminate FOR_EACH_BB_REVERSE macro David Malcolm
2013-12-07  7:14                       ` Oleg Endo
2013-12-06 15:08                     ` [PATCH 11/13] Eliminate FOR_EACH_BB macro David Malcolm
2013-12-07  7:13                       ` Oleg Endo
2013-12-06 15:08                     ` [PATCH 09/13] Eliminate profile_status macro David Malcolm
2013-12-06 15:08                     ` [PATCH 08/13] Eliminate label_to_block_map macro David Malcolm
2013-12-06 15:09                     ` [PATCH 10/13] Eliminate last_basic_block macro David Malcolm
2013-12-06 15:58                       ` Steven Bosscher
2013-12-06 18:57                         ` Oleg Endo
2013-12-06 20:25                           ` Richard Biener
2013-12-09 21:48                             ` David Malcolm
2013-12-09 21:53                               ` Oleg Endo
2013-12-10  9:43                               ` Richard Biener
2013-12-06 15:12                     ` [PATCH 13/13] Eliminate FOR_ALL_BB macro David Malcolm
2013-12-06 15:12                     ` [PATCH 06/13] Eliminate BASIC_BLOCK macro David Malcolm
2013-12-06 15:39                     ` [PATCH 00/13] Remove remaining cfun-using macros from basic-block.h Richard Biener
2013-12-09 22:07                       ` David Malcolm

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).