[PATCH] Eliminate n_basic_blocks macro (was Re: [PATCH] Avoid some unnecessary set_cfun calls)


On Wed, 2013-11-13 at 14:44 +0100, Richard Biener wrote:
> On Wed, 13 Nov 2013, David Malcolm wrote:
> 
> > On Wed, 2013-11-13 at 13:53 +0100, Richard Biener wrote:
> > > On Wed, 13 Nov 2013, Martin Jambor wrote:
> > > 
> > > > Hi,
> > > > 
> > > > On Wed, Nov 13, 2013 at 10:49:09AM +0100, Jakub Jelinek wrote:
> > > > > Hi!
> > > > > 
> > > > > void f1 (void) {}
> > > > > __attribute__((target ("avx"))) void f2 (void) {}
> > > > > __attribute__((target ("avx2"))) void f3 (void) {}
> > > > > __attribute__((target ("sse3"))) void f4 (void) {}
> > > > > __attribute__((target ("ssse3"))) void f5 (void) {}
> > > > > __attribute__((target ("sse4"))) void f6 (void) {}
> > > > > takes about 3 seconds to compile at -O2, because set_cfun is terribly
> > > > > expensive and there are hundreds of such calls.
> > > > > The following patch is just a quick change to avoid some of them:
> > > > > execute_function_todo starts with:
> > > > >   unsigned int flags = (size_t)data;
> > > > >   flags &= ~cfun->last_verified;
> > > > >   if (!flags)
> > > > >     return;
> > > > > and if flags is initially zero, it does nothing.
> > > > > Similarly, execute_function_dump has the whole body surrounded by
> > > > >   if (dump_file && current_function_decl)
> > > > > and thus if dump_file is NULL, there is nothing to do.
> > > > > So IMHO in neither case (which happens pretty frequently) do we need
> > > > > to set_cfun to every function during IPA.
> > > > > 
> > > > > Also, I wonder if we couldn't defer the expensive ira_init.  If the
> > > > > info it computes is used only during the RTL optimization passes
> > > > > (I haven't verified that yet), then supposedly we could just remember,
> > > > > via some target hook, what the state was the last time we ran
> > > > > ira_init, and call ira_init again at the start of expansion or so if
> > > > > it has changed since then.
> > > > 
> > > > I was wondering whether the expensive parts of set_cfun could only be
> > > > run in pass_all_optimizations (and the -Og equivalent) but not when
> > > > changing functions in early and IPA passes.
> > > 
> > > Sounds like a hack ;)
> > > 
> > > Better get things working without the cfun/current_function_decl globals.
> > > Wasn't there someone replacing all implicit uses with explicit ones
> > > for stuff like n_basic_blocks?
> > 
> > I was working on this:
> > http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00780.html
> > though I switched to other tasks I felt were higher priority; sorry.
> > 
> > Do you still want me to go ahead and commit the series of changes you
> > pre-approved there?
> > 
> > i.e. the "n_basic_blocks" macro goes away in favor of:
> >    n_basic_blocks_for_fn (cfun)
> > as a renaming of the existing n_basic_blocks_for_function macro,
> > followed up by analogous changes to the other macros.
> > 
> > Or should I repost before committing?
> 
> I'd say create the n_basic_blocks patch and post it, that gives
> people a chance to object.  If nobody chimes in I approve it
> and pre-approve the rest ;)
> 
> Using n_basic_blocks_for_fn (cfun) might feel backwards if
> eventually we'd want to C++-ify struct function and make
> n_basic_blocks a member function which would make it
> cfun->n_basic_blocks () instead.  Ok, I think that will get
> us into C++ bikeshedding again ;)

[I can't face another C vs C++ discussion right now :)]

Thanks.  Attached is such a patch, eliminating the:
  n_basic_blocks
macro in favor of
  n_basic_blocks_for_fn (cfun)
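
For reference, the transformation is purely mechanical; here is a minimal
before/after sketch (taken from the basic-block.h and alias.c hunks below):

  /* Before: the macro implicitly reads the global cfun.  */
  #define n_basic_blocks		(cfun->cfg->x_n_basic_blocks)

  rpo = XNEWVEC (int, n_basic_blocks);

  /* After: the function is an explicit argument; passing cfun keeps
     the old behavior.  */
  #define n_basic_blocks_for_fn(FN)	((FN)->cfg->x_n_basic_blocks)

  rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));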

Successfully bootstrapped on x86_64-unknown-linux-gnu, and successfully
compiled stage1 for spu-unknown-elf and s390-linux-gnu (since those
targets' config files are affected by the patch).

Given the conditional pre-approval above, I'm posting here to give
people a chance to object - otherwise I'll commit, and follow up with the
other macros that implicitly use cfun, as per the thread linked to above.


commit a53ca61da66612f7daba8e5e0a26faeaa699507f
Author: David Malcolm <dmalcolm@redhat.com>
Date:   Fri Nov 1 13:29:39 2013 -0400

    Eliminate n_basic_blocks macro
    
    gcc/
    	* basic-block.h (n_basic_blocks_for_function): Rename macro to...
    	(n_basic_blocks_for_fn): ...this.
    
    	(n_basic_blocks): Eliminate macro as work towards making uses of
    	cfun explicit.
    
    	* cfgloop.c (init_loops_structure): Update for renaming of
    	"n_basic_blocks_for_function" to "n_basic_blocks_for_fn".
    	* graph.c (draw_cfg_nodes_no_loops): Likewise.
    	* ipa-utils.c (ipa_merge_profiles): Likewise.
    	* lto-streamer-in.c (make_new_block): Likewise.
    	* tree-cfg.c (init_empty_tree_cfg_for_function): Likewise.
    	(dump_function_to_file): Likewise.
    
    	* alias.c (init_alias_analysis): Replace usage of "n_basic_blocks"
    	macro with "n_basic_blocks_for_fn (cfun)".
    	* bb-reorder.c (partition_hot_cold_basic_blocks): Likewise.
    	(duplicate_computed_gotos): Likewise.
    	(reorder_basic_blocks): Likewise.
    	* bt-load.c (augment_live_range): Likewise.
    	* cfg.c (expunge_block): Likewise.
    	(compact_blocks): Likewise.
    	* cfganal.c (single_pred_before_succ_order): Likewise.
    	(compute_idf): Likewise.
    	(flow_dfs_compute_reverse_init): Likewise.
    	(pre_and_rev_post_order_compute): Likewise.
    	(pre_and_rev_post_order_compute_fn): Likewise.
    	(inverted_post_order_compute): Likewise.
    	(post_order_compute): Likewise.
    	(print_edge_list): Likewise.
    	(find_unreachable_blocks): Likewise.
    	(mark_dfs_back_edges): Likewise.
    	* cfgcleanup.c (try_optimize_cfg): Likewise.
    	(try_forward_edges): Likewise.
    	* cfghooks.c (dump_flow_info): Likewise.
    	* cfgloop.c (verify_loop_structure): Likewise.
    	(get_loop_body): Likewise.
    	(flow_loops_find): Likewise.
    	* cfgloopmanip.c (add_loop): Likewise.
    	(remove_path): Likewise.
    	(find_path): Likewise.
    	* cfgrtl.c (rtl_flow_call_edges_add): Likewise.
    	(rtl_verify_bb_layout): Likewise.
    	(entry_of_function): Likewise.
    	(rtl_create_basic_block): Likewise.
    	* coverage.c (coverage_compute_cfg_checksum): Likewise.
    	* cprop.c (one_cprop_pass): Likewise.
    	(is_too_expensive): Likewise.
    	* df-core.c (df_compute_cfg_image): Likewise.
    	(df_compact_blocks): Likewise.
    	(df_worklist_dataflow_doublequeue): Likewise.
    	* dominance.c (calculate_dominance_info): Likewise.
    	(calc_dfs_tree): Likewise.
    	(calc_dfs_tree_nonrec): Likewise.
    	(init_dom_info): Likewise.
    	* domwalk.c (cmp_bb_postorder): Likewise.
    	* function.c (thread_prologue_and_epilogue_insns): Likewise.
    	(generate_setjmp_warnings): Likewise.
    	* fwprop.c (build_single_def_use_links): Likewise.
    	* gcse.c (is_too_expensive): Likewise.
    	(one_code_hoisting_pass): Likewise.
    	(one_pre_gcse_pass): Likewise.
    	* graphite.c (graphite_initialize): Likewise.
    	* haifa-sched.c (haifa_sched_init): Likewise.
    	* ipa-inline-analysis.c (estimate_function_body_sizes): Likewise.
    	* ira-build.c (ira_build): Likewise.
    	* lcm.c (compute_nearerout): Likewise.
    	(compute_available): Likewise.
    	(compute_laterin): Likewise.
    	(compute_antinout_edge): Likewise.
    	* lra-lives.c (lra_create_live_ranges): Likewise.
    	* lra.c (has_nonexceptional_receiver): Likewise.
    	* mcf.c (create_fixup_graph): Likewise.
    	* profile.c (branch_prob): Likewise.
    	* reg-stack.c (convert_regs_2): Likewise.
    	* regrename.c (regrename_analyze): Likewise.
    	* reload1.c (has_nonexceptional_receiver): Likewise.
    	* reorg.c (dbr_schedule): Likewise.
    	* sched-deps.c (sched_deps_init): Likewise.
    	* sched-ebb.c (schedule_ebbs): Likewise.
    	* sched-rgn.c (extend_regions): Likewise.
    	(schedule_insns): Likewise.
    	(sched_rgn_init): Likewise.
    	(extend_rgns): Likewise.
    	(haifa_find_rgns): Likewise.
    	* sel-sched-ir.c (recompute_rev_top_order): Likewise.
    	(sel_recompute_toporder): Likewise.
    	* sel-sched.c (run_selective_scheduling): Likewise.
    	* store-motion.c (one_store_motion_pass): Likewise.
    	(remove_reachable_equiv_notes): Likewise.
    	* tracer.c (tracer): Likewise.
    	(tail_duplicate): Likewise.
    	* tree-cfg.c (gimple_flow_call_edges_add): Likewise.
    	(dump_cfg_stats): Likewise.
    	(gimple_dump_cfg): Likewise.
    	(create_bb): Likewise.
    	(build_gimple_cfg): Likewise.
    	* tree-cfgcleanup.c (merge_phi_nodes): Likewise.
    	* tree-inline.c (optimize_inline_calls): Likewise.
    	(fold_marked_statements): Likewise.
    	* tree-ssa-ifcombine.c (tree_ssa_ifcombine): Likewise.
    	* tree-ssa-loop-ch.c (copy_loop_headers): Likewise.
    	* tree-ssa-loop-im.c (analyze_memory_references): Likewise.
    	* tree-ssa-loop-manip.c (compute_live_loop_exits): Likewise.
    	* tree-ssa-math-opts.c (execute_cse_reciprocals): Likewise.
    	* tree-ssa-phiopt.c (tree_ssa_phiopt_worker): Likewise.
    	* tree-ssa-pre.c (do_pre): Likewise.
    	(init_pre): Likewise.
    	(compute_avail): Likewise.
    	* tree-ssa-reassoc.c (init_reassoc): Likewise.
    	* tree-ssa-sccvn.c (init_scc_vn): Likewise.
    	* tree-ssa-tail-merge.c (alloc_cluster_vectors): Likewise.
    	(init_worklist): Likewise.
    	* tree-ssa-uncprop.c (associate_equivalences_with_edges): Likewise.
    	* var-tracking.c (variable_tracking_main_1): Likewise.
    	(vt_find_locations): Likewise.
    	(vt_stack_adjustments): Likewise.
    	* config/s390/s390.c (s390_optimize_nonescaping_tx): Likewise.
    	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.

diff --git a/gcc/alias.c b/gcc/alias.c
index 1736169..0cf5655 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2952,7 +2952,7 @@ init_alias_analysis (void)
      The state of the arrays for the set chain in question does not matter
      since the program has undefined behavior.  */
 
-  rpo = XNEWVEC (int, n_basic_blocks);
+  rpo = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   rpo_cnt = pre_and_rev_post_order_compute (NULL, rpo, false);
 
   pass = 0;
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 9c28f14..74384b2 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -315,7 +315,7 @@ struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR_FOR_FUNCTION(FN)     ((FN)->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR_FOR_FUNCTION(FN)	     ((FN)->cfg->x_exit_block_ptr)
 #define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
-#define n_basic_blocks_for_function(FN)	     ((FN)->cfg->x_n_basic_blocks)
+#define n_basic_blocks_for_fn(FN)	     ((FN)->cfg->x_n_basic_blocks)
 #define n_edges_for_function(FN)	     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
@@ -330,7 +330,6 @@ struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR		(cfun->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR		(cfun->cfg->x_exit_block_ptr)
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
-#define n_basic_blocks		(cfun->cfg->x_n_basic_blocks)
 #define n_edges			(cfun->cfg->x_n_edges)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 8e2348f..45bf128 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -2220,7 +2220,7 @@ reorder_basic_blocks (void)
 
   gcc_assert (current_ir_type () == IR_RTL_CFGLAYOUT);
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return;
 
   set_edge_can_fallthru_flag ();
@@ -2244,7 +2244,7 @@ reorder_basic_blocks (void)
       bbd[i].node = NULL;
     }
 
-  traces = XNEWVEC (struct trace, n_basic_blocks);
+  traces = XNEWVEC (struct trace, n_basic_blocks_for_fn (cfun));
   n_traces = 0;
   find_traces (&n_traces, traces);
   connect_traces (n_traces, traces);
@@ -2388,7 +2388,7 @@ duplicate_computed_gotos (void)
   bitmap candidates;
   int max_size;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   clear_bb_flags ();
@@ -2640,7 +2640,7 @@ partition_hot_cold_basic_blocks (void)
 {
   vec<edge> crossing_edges;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   df_set_flags (DF_DEFER_INSN_RESCAN);
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index 5384d01..348e40b 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -900,7 +900,7 @@ augment_live_range (bitmap live_range, HARD_REG_SET *btrs_live_in_range,
 {
   basic_block *worklist, *tos;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   if (dominated_by_p (CDI_DOMINATORS, new_bb, head_bb))
     {
diff --git a/gcc/cfg.c b/gcc/cfg.c
index cfada73..10791a7 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -169,12 +169,12 @@ compact_blocks (void)
 	  bb->index = i;
 	  i++;
 	}
-      gcc_assert (i == n_basic_blocks);
+      gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
       for (; i < last_basic_block; i++)
 	SET_BASIC_BLOCK (i, NULL);
     }
-  last_basic_block = n_basic_blocks;
+  last_basic_block = n_basic_blocks_for_fn (cfun);
 }
 
 /* Remove block B from the basic block array.  */
@@ -184,7 +184,7 @@ expunge_block (basic_block b)
 {
   unlink_block (b);
   SET_BASIC_BLOCK (b->index, NULL);
-  n_basic_blocks--;
+  n_basic_blocks_for_fn (cfun)--;
   /* We should be able to ggc_free here, but we are not.
      The dead SSA_NAMES are left pointing to dead statements that are pointing
      to dead basic blocks making garbage collector to die.
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index b221611..1c90f8c 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -76,7 +76,7 @@ mark_dfs_back_edges (void)
   post = XCNEWVEC (int, last_basic_block);
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -152,7 +152,7 @@ find_unreachable_blocks (void)
   edge_iterator ei;
   basic_block *tos, *worklist, bb;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Clear all the reachability flags.  */
 
@@ -256,7 +256,7 @@ print_edge_list (FILE *f, struct edge_list *elist)
   int x;
 
   fprintf (f, "Compressed edge list, %d BBs + entry & exit, and %d edges\n",
-	   n_basic_blocks, elist->num_edges);
+	   n_basic_blocks_for_fn (cfun), elist->num_edges);
 
   for (x = 0; x < elist->num_edges; x++)
     {
@@ -609,7 +609,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
     post_order[post_order_num++] = EXIT_BLOCK;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -667,7 +667,7 @@ post_order_compute (int *post_order, bool include_entry_exit,
 
   /* Delete the unreachable blocks if some were found and we are
      supposed to do it.  */
-  if (delete_unreachable && (count != n_basic_blocks))
+  if (delete_unreachable && (count != n_basic_blocks_for_fn (cfun)))
     {
       basic_block b;
       basic_block next_bb;
@@ -762,7 +762,7 @@ inverted_post_order_compute (int *post_order)
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -898,11 +898,11 @@ pre_and_rev_post_order_compute_fn (struct function *fn,
   edge_iterator *stack;
   int sp;
   int pre_order_num = 0;
-  int rev_post_order_num = n_basic_blocks - 1;
+  int rev_post_order_num = n_basic_blocks_for_fn (cfun) - 1;
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   if (include_entry_exit)
@@ -1000,11 +1000,12 @@ pre_and_rev_post_order_compute (int *pre_order, int *rev_post_order,
 					 include_entry_exit);
   if (include_entry_exit)
     /* The number of nodes visited should be the number of blocks.  */
-    gcc_assert (pre_order_num == n_basic_blocks);
+    gcc_assert (pre_order_num == n_basic_blocks_for_fn (cfun));
   else
     /* The number of nodes visited should be the number of blocks minus
        the entry and exit blocks which are not visited here.  */
-    gcc_assert (pre_order_num == n_basic_blocks - NUM_FIXED_BLOCKS);
+    gcc_assert (pre_order_num
+		== (n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS));
 
   return pre_order_num;
 }
@@ -1043,7 +1044,7 @@ static void
 flow_dfs_compute_reverse_init (depth_first_search_ds data)
 {
   /* Allocate stack for back-tracking up CFG.  */
-  data->stack = XNEWVEC (basic_block, n_basic_blocks);
+  data->stack = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   data->sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -1275,7 +1276,7 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs)
   bitmap phi_insertion_points;
 
   /* Each block can appear at most twice on the work-stack.  */
-  work_stack.create (2 * n_basic_blocks);
+  work_stack.create (2 * n_basic_blocks_for_fn (cfun));
   phi_insertion_points = BITMAP_ALLOC (NULL);
 
   /* Seed the work list with all the blocks in DEF_BLOCKS.  We use
@@ -1493,8 +1494,8 @@ basic_block *
 single_pred_before_succ_order (void)
 {
   basic_block x, y;
-  basic_block *order = XNEWVEC (basic_block, n_basic_blocks);
-  unsigned n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  basic_block *order = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  unsigned n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
   unsigned np, i;
   sbitmap visited = sbitmap_alloc (last_basic_block);
 
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 5161190..a2192cb 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -459,7 +459,7 @@ try_forward_edges (int mode, basic_block b)
 	  && find_reg_note (BB_END (first), REG_CROSSING_JUMP, NULL_RTX))
 	return changed;
 
-      while (counter < n_basic_blocks)
+      while (counter < n_basic_blocks_for_fn (cfun))
 	{
 	  basic_block new_target = NULL;
 	  bool new_target_threaded = false;
@@ -472,7 +472,7 @@ try_forward_edges (int mode, basic_block b)
 	      /* Bypass trivial infinite loops.  */
 	      new_target = single_succ (target);
 	      if (target == new_target)
-		counter = n_basic_blocks;
+		counter = n_basic_blocks_for_fn (cfun);
 	      else if (!optimize)
 		{
 		  /* When not optimizing, ensure that edges or forwarder
@@ -521,7 +521,8 @@ try_forward_edges (int mode, basic_block b)
 	      if (t)
 		{
 		  if (!threaded_edges)
-		    threaded_edges = XNEWVEC (edge, n_basic_blocks);
+		    threaded_edges = XNEWVEC (edge,
+					      n_basic_blocks_for_fn (cfun));
 		  else
 		    {
 		      int i;
@@ -533,7 +534,7 @@ try_forward_edges (int mode, basic_block b)
 			  break;
 		      if (i < nthreaded_edges)
 			{
-			  counter = n_basic_blocks;
+			  counter = n_basic_blocks_for_fn (cfun);
 			  break;
 			}
 		    }
@@ -542,7 +543,9 @@ try_forward_edges (int mode, basic_block b)
 		  if (t->dest == b)
 		    break;
 
-		  gcc_assert (nthreaded_edges < n_basic_blocks - NUM_FIXED_BLOCKS);
+		  gcc_assert (nthreaded_edges
+			      < (n_basic_blocks_for_fn (cfun)
+				 - NUM_FIXED_BLOCKS));
 		  threaded_edges[nthreaded_edges++] = t;
 
 		  new_target = t->dest;
@@ -558,7 +561,7 @@ try_forward_edges (int mode, basic_block b)
 	  threaded |= new_target_threaded;
 	}
 
-      if (counter >= n_basic_blocks)
+      if (counter >= n_basic_blocks_for_fn (cfun))
 	{
 	  if (dump_file)
 	    fprintf (dump_file, "Infinite loop in BB %i.\n",
@@ -2713,7 +2716,7 @@ try_optimize_cfg (int mode)
 		  /* Note that forwarder_block_p true ensures that
 		     there is a successor for this block.  */
 		  && (single_succ_edge (b)->flags & EDGE_FALLTHRU)
-		  && n_basic_blocks > NUM_FIXED_BLOCKS + 1)
+		  && n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS + 1)
 		{
 		  if (dump_file)
 		    fprintf (dump_file,
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index c12a62f..3016c54 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -323,7 +323,8 @@ dump_flow_info (FILE *file, int flags)
 {
   basic_block bb;
 
-  fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks, n_edges);
+  fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks_for_fn (cfun),
+	   n_edges);
   FOR_ALL_BB (bb)
     dump_bb (file, bb, 0, flags);
 
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 3ff8e84..d5abf89 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -351,7 +351,7 @@ init_loops_structure (struct function *fn,
 
   /* Dummy loop containing whole function.  */
   root = alloc_loop ();
-  root->num_nodes = n_basic_blocks_for_function (fn);
+  root->num_nodes = n_basic_blocks_for_fn (fn);
   root->latch = EXIT_BLOCK_PTR_FOR_FUNCTION (fn);
   root->header = ENTRY_BLOCK_PTR_FOR_FUNCTION (fn);
   ENTRY_BLOCK_PTR_FOR_FUNCTION (fn)->loop_father = root;
@@ -421,21 +421,21 @@ flow_loops_find (struct loops *loops)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return loops;
 
   /* The root loop node contains all basic-blocks.  */
-  loops->tree_root->num_nodes = n_basic_blocks;
+  loops->tree_root->num_nodes = n_basic_blocks_for_fn (cfun);
 
   /* Compute depth first search order of the CFG so that outer
      natural loops will be found before inner natural loops.  */
-  rc_order = XNEWVEC (int, n_basic_blocks);
+  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   pre_and_rev_post_order_compute (NULL, rc_order, false);
 
   /* Gather all loop headers in reverse completion order and allocate
      loop structures for loops that are not already present.  */
   larray.create (loops->larray->length ());
-  for (b = 0; b < n_basic_blocks - NUM_FIXED_BLOCKS; b++)
+  for (b = 0; b < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; b++)
     {
       basic_block header = BASIC_BLOCK (rc_order[b]);
       if (bb_loop_header_p (header))
@@ -831,7 +831,7 @@ get_loop_body (const struct loop *loop)
     {
       /* There may be blocks unreachable from EXIT_BLOCK, hence we need to
 	 special-case the fake loop that contains the whole function.  */
-      gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks);
+      gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks_for_fn (cfun));
       body[tv++] = loop->header;
       body[tv++] = EXIT_BLOCK_PTR;
       FOR_EACH_BB (bb)
@@ -1367,7 +1367,7 @@ verify_loop_structure (void)
   /* Check the recorded loop father and sizes of loops.  */
   visited = sbitmap_alloc (last_basic_block);
   bitmap_clear (visited);
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   FOR_EACH_LOOP (li, loop, LI_FROM_INNERMOST)
     {
       unsigned n;
@@ -1379,7 +1379,7 @@ verify_loop_structure (void)
 	  continue;
 	}
 
-      n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+      n = get_loop_body_with_size (loop, bbs, n_basic_blocks_for_fn (cfun));
       if (loop->num_nodes != n)
 	{
 	  error ("size of loop %d should be %d, not %d",
diff --git a/gcc/cfgloopmanip.c b/gcc/cfgloopmanip.c
index be876db..3715e08 100644
--- a/gcc/cfgloopmanip.c
+++ b/gcc/cfgloopmanip.c
@@ -69,9 +69,9 @@ find_path (edge e, basic_block **bbs)
   gcc_assert (EDGE_COUNT (e->dest->preds) <= 1);
 
   /* Find bbs in the path.  */
-  *bbs = XNEWVEC (basic_block, n_basic_blocks);
+  *bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   return dfs_enumerate_from (e->dest, 0, rpe_enum_p, *bbs,
-			     n_basic_blocks, e->dest);
+			     n_basic_blocks_for_fn (cfun), e->dest);
 }
 
 /* Fix placement of basic block BB inside loop hierarchy --
@@ -341,7 +341,7 @@ remove_path (edge e)
   nrem = find_path (e, &rem_bbs);
 
   n_bord_bbs = 0;
-  bord_bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bord_bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   seen = sbitmap_alloc (last_basic_block);
   bitmap_clear (seen);
 
@@ -448,8 +448,8 @@ add_loop (struct loop *loop, struct loop *outer)
   flow_loop_tree_node_add (outer, loop);
 
   /* Find its nodes.  */
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  n = get_loop_body_with_size (loop, bbs, n_basic_blocks_for_fn (cfun));
 
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index d6733a2..c61f0fb 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -361,7 +361,7 @@ rtl_create_basic_block (void *headp, void *endp, basic_block after)
       vec_safe_grow_cleared (basic_block_info, new_size);
     }
 
-  n_basic_blocks++;
+  n_basic_blocks_for_fn (cfun)++;
 
   bb = create_basic_block_structure (head, end, NULL, after);
   bb->aux = NULL;
@@ -500,7 +500,7 @@ make_pass_free_cfg (gcc::context *ctxt)
 rtx
 entry_of_function (void)
 {
-  return (n_basic_blocks > NUM_FIXED_BLOCKS ?
+  return (n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS ?
 	  BB_HEAD (ENTRY_BLOCK_PTR->next_bb) : get_insns ());
 }
 
@@ -2917,10 +2917,10 @@ rtl_verify_bb_layout (void)
 	curr_bb = NULL;
     }
 
-  if (num_bb_notes != n_basic_blocks - NUM_FIXED_BLOCKS)
+  if (num_bb_notes != n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS)
     internal_error
       ("number of bb notes in insn chain (%d) != n_basic_blocks (%d)",
-       num_bb_notes, n_basic_blocks);
+       num_bb_notes, n_basic_blocks_for_fn (cfun));
 
    return err;
 }
@@ -4751,7 +4751,7 @@ rtl_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
index f0d6a59..5a13412 100644
--- a/gcc/config/s390/s390.c
+++ b/gcc/config/s390/s390.c
@@ -7963,7 +7963,7 @@ s390_optimize_nonescaping_tx (void)
   if (!cfun->machine->tbegin_p)
     return;
 
-  for (bb_index = 0; bb_index < n_basic_blocks; bb_index++)
+  for (bb_index = 0; bb_index < n_basic_blocks_for_fn (cfun); bb_index++)
     {
       bb = BASIC_BLOCK (bb_index);
 
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index 38c441d..6f56e48 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2469,13 +2469,13 @@ spu_machine_dependent_reorg (void)
   compact_blocks ();
 
   spu_bb_info =
-    (struct spu_bb_info *) xcalloc (n_basic_blocks,
+    (struct spu_bb_info *) xcalloc (n_basic_blocks_for_fn (cfun),
 				    sizeof (struct spu_bb_info));
 
   /* We need exact insn addresses and lengths.  */
   shorten_branches (get_insns ());
 
-  for (i = n_basic_blocks - 1; i >= 0; i--)
+  for (i = n_basic_blocks_for_fn (cfun) - 1; i >= 0; i--)
     {
       bb = BASIC_BLOCK (i);
       branch = 0;
diff --git a/gcc/coverage.c b/gcc/coverage.c
index 9b0fc8b..ce87e3e 100644
--- a/gcc/coverage.c
+++ b/gcc/coverage.c
@@ -584,7 +584,7 @@ unsigned
 coverage_compute_cfg_checksum (void)
 {
   basic_block bb;
-  unsigned chksum = n_basic_blocks;
+  unsigned chksum = n_basic_blocks_for_fn (cfun);
 
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 358fca9..78cfeba 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -1729,24 +1729,25 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, n_basic_blocks_for_fn (cfun),
+	       n_edges / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
 
   /* If allocating memory for the cprop bitmap would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((n_basic_blocks_for_fn (cfun)
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, n_basic_blocks_for_fn (cfun), max_reg_num ());
 
       return true;
     }
@@ -1763,7 +1764,7 @@ one_cprop_pass (void)
   int changed = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_ ("const/copy propagation disabled")))
     return 0;
 
@@ -1873,7 +1874,8 @@ one_cprop_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "CPROP of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d local const props, %d local copy props, ",
 	       local_const_prop_count, local_copy_prop_count);
       fprintf (dump_file, "%d global const props, %d global copy props\n\n",
diff --git a/gcc/df-core.c b/gcc/df-core.c
index deea755..20d6c4e 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1097,8 +1097,8 @@ df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
     fprintf (dump_file, "df_worklist_dataflow_doublequeue:"
 	     "n_basic_blocks %d n_edges %d"
 	     " count %d (%5.2g)\n",
-	     n_basic_blocks, n_edges,
-	     dcount, dcount / (float)n_basic_blocks);
+	     n_basic_blocks_for_fn (cfun), n_edges,
+	     dcount, dcount / (float)n_basic_blocks_for_fn (cfun));
 }
 
 /* Worklist-based dataflow solver. It uses sbitmap as a worklist,
@@ -1606,7 +1606,7 @@ df_compact_blocks (void)
       i++;
     }
 
-  gcc_assert (i == n_basic_blocks);
+  gcc_assert (i == n_basic_blocks_for_fn (cfun));
 
   for (; i < last_basic_block; i++)
     SET_BASIC_BLOCK (i, NULL);
@@ -1714,7 +1714,7 @@ static int *
 df_compute_cfg_image (void)
 {
   basic_block bb;
-  int size = 2 + (2 * n_basic_blocks);
+  int size = 2 + (2 * n_basic_blocks_for_fn (cfun));
   int i;
   int * map;
 
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 569f1f4..6530109 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -146,7 +146,7 @@ static void
 init_dom_info (struct dom_info *di, enum cdi_direction dir)
 {
   /* We need memory for n_basic_blocks nodes.  */
-  unsigned int num = n_basic_blocks;
+  unsigned int num = n_basic_blocks_for_fn (cfun);
   init_ar (di->dfs_parent, TBB, num, 0);
   init_ar (di->path_min, TBB, num, i);
   init_ar (di->key, TBB, num, i);
@@ -233,7 +233,7 @@ calc_dfs_tree_nonrec (struct dom_info *di, basic_block bb, bool reverse)
   /* Ending block.  */
   basic_block ex_block;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Initialize our border blocks, and the first edge.  */
@@ -394,7 +394,7 @@ calc_dfs_tree (struct dom_info *di, bool reverse)
   di->nodes = di->dfsnum - 1;
 
   /* This aborts e.g. when there is _no_ path from ENTRY to EXIT at all.  */
-  gcc_assert (di->nodes == (unsigned int) n_basic_blocks - 1);
+  gcc_assert (di->nodes == (unsigned int) n_basic_blocks_for_fn (cfun) - 1);
 }
 
 /* Compress the path from V to the root of its set and update path_min at the
@@ -652,7 +652,7 @@ calculate_dominance_info (enum cdi_direction dir)
 	{
 	  b->dom[dir_index] = et_new_tree (b);
 	}
-      n_bbs_in_dom_tree[dir_index] = n_basic_blocks;
+      n_bbs_in_dom_tree[dir_index] = n_basic_blocks_for_fn (cfun);
 
       init_dom_info (&di, dir);
       calc_dfs_tree (&di, reverse);
diff --git a/gcc/domwalk.c b/gcc/domwalk.c
index 4816b4c..4c7354e 100644
--- a/gcc/domwalk.c
+++ b/gcc/domwalk.c
@@ -150,13 +150,14 @@ void
 dom_walker::walk (basic_block bb)
 {
   basic_block dest;
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks * 2);
+  basic_block *worklist = XNEWVEC (basic_block,
+				   n_basic_blocks_for_fn (cfun) * 2);
   int sp = 0;
   int *postorder, postorder_num;
 
   if (m_dom_direction == CDI_DOMINATORS)
     {
-      postorder = XNEWVEC (int, n_basic_blocks);
+      postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
       postorder_num = inverted_post_order_compute (postorder);
       bb_postorder = XNEWVEC (int, last_basic_block);
       for (int i = 0; i < postorder_num; ++i)
diff --git a/gcc/function.c b/gcc/function.c
index a36f152..f6e5472 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -4039,7 +4039,7 @@ generate_setjmp_warnings (void)
 {
   bitmap setjmp_crosses = regstat_get_setjmp_crosses ();
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS
       || bitmap_empty_p (setjmp_crosses))
     return;
 
@@ -6026,7 +6026,7 @@ thread_prologue_and_epilogue_insns (void)
       /* Find the set of basic blocks that require a stack frame,
 	 and blocks that are too big to be duplicated.  */
 
-      vec.create (n_basic_blocks);
+      vec.create (n_basic_blocks_for_fn (cfun));
 
       CLEAR_HARD_REG_SET (set_up_by_prologue.set);
       add_to_hard_reg_set (&set_up_by_prologue.set, Pmode,
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index d08710c..da40a67 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -289,7 +289,7 @@ build_single_def_use_links (void)
   reg_defs.create (max_reg_num ());
   reg_defs.safe_grow_cleared (max_reg_num ());
 
-  reg_defs_stack.create (n_basic_blocks * 10);
+  reg_defs_stack.create (n_basic_blocks_for_fn (cfun) * 10);
   local_md = BITMAP_ALLOC (NULL);
   local_lr = BITMAP_ALLOC (NULL);
 
diff --git a/gcc/gcse.c b/gcc/gcse.c
index 571e878..5ed99bd 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2662,7 +2662,7 @@ one_pre_gcse_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("PRE disabled")))
     return 0;
 
@@ -2708,7 +2708,8 @@ one_pre_gcse_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "PRE GCSE of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -3591,7 +3592,7 @@ one_code_hoisting_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("GCSE disabled")))
     return 0;
 
@@ -3642,7 +3643,8 @@ one_code_hoisting_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "HOIST of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), n_basic_blocks_for_fn (cfun),
+	       bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -4067,24 +4069,25 @@ is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + n_basic_blocks_for_fn (cfun) * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, n_basic_blocks_for_fn (cfun),
+	       n_edges / n_basic_blocks_for_fn (cfun));
 
       return true;
     }
 
   /* If allocating memory for the dataflow bitmaps would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((n_basic_blocks_for_fn (cfun)
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, n_basic_blocks_for_fn (cfun), max_reg_num ());
 
       return true;
     }
diff --git a/gcc/graph.c b/gcc/graph.c
index 5c890e5..1dc9dbc 100644
--- a/gcc/graph.c
+++ b/gcc/graph.c
@@ -153,7 +153,7 @@ draw_cfg_node_succ_edges (pretty_printer *pp, int funcdef_no, basic_block bb)
 static void
 draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
 {
-  int *rpo = XNEWVEC (int, n_basic_blocks_for_function (fun));
+  int *rpo = XNEWVEC (int, n_basic_blocks_for_fn (fun));
   int i, n;
   sbitmap visited;
 
@@ -161,8 +161,8 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
   bitmap_clear (visited);
 
   n = pre_and_rev_post_order_compute_fn (fun, NULL, rpo, true);
-  for (i = n_basic_blocks_for_function (fun) - n;
-       i < n_basic_blocks_for_function (fun); i++)
+  for (i = n_basic_blocks_for_fn (fun) - n;
+       i < n_basic_blocks_for_fn (fun); i++)
     {
       basic_block bb = BASIC_BLOCK (rpo[i]);
       draw_cfg_node (pp, fun->funcdef_no, bb);
@@ -170,7 +170,7 @@ draw_cfg_nodes_no_loops (pretty_printer *pp, struct function *fun)
     }
   free (rpo);
 
-  if (n != n_basic_blocks_for_function (fun))
+  if (n != n_basic_blocks_for_fn (fun))
     {
       /* Some blocks are unreachable.  We still want to dump them.  */
       basic_block bb;
diff --git a/gcc/graphite.c b/gcc/graphite.c
index 176c47c..ea54188 100644
--- a/gcc/graphite.c
+++ b/gcc/graphite.c
@@ -207,7 +207,8 @@ graphite_initialize (isl_ctx *ctx)
   if (number_of_loops (cfun) <= 1
       /* FIXME: This limit on the number of basic blocks of a function
 	 should be removed when the SCOP detection is faster.  */
-      || n_basic_blocks > PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION))
+      || (n_basic_blocks_for_fn (cfun) >
+	  PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION)))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
 	print_global_statistics (dump_file);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 728d51b..cbda61c 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -6754,7 +6754,7 @@ haifa_sched_init (void)
      whole function.  */
   {
     bb_vec_t bbs;
-    bbs.create (n_basic_blocks);
+    bbs.create (n_basic_blocks_for_fn (cfun));
     basic_block bb;
 
     sched_init_bbs ();
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index 4458723..3658814 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -2395,7 +2395,7 @@ estimate_function_body_sizes (struct cgraph_node *node, bool early)
   if (parms_info)
     compute_bb_predicates (node, parms_info, info);
   gcc_assert (cfun == my_function);
-  order = XNEWVEC (int, n_basic_blocks);
+  order = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   nblocks = pre_and_rev_post_order_compute (NULL, order, false);
   for (n = 0; n < nblocks; n++)
     {
diff --git a/gcc/ipa-utils.c b/gcc/ipa-utils.c
index 8410816..dff4c72 100644
--- a/gcc/ipa-utils.c
+++ b/gcc/ipa-utils.c
@@ -700,8 +700,8 @@ ipa_merge_profiles (struct cgraph_node *dst,
   cgraph_get_body (dst);
   srccfun = DECL_STRUCT_FUNCTION (src->decl);
   dstcfun = DECL_STRUCT_FUNCTION (dst->decl);
-  if (n_basic_blocks_for_function (srccfun)
-      != n_basic_blocks_for_function (dstcfun))
+  if (n_basic_blocks_for_fn (srccfun)
+      != n_basic_blocks_for_fn (dstcfun))
     {
       if (cgraph_dump_file)
 	fprintf (cgraph_dump_file,
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index ed51376..ca6f64d 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -3496,7 +3496,7 @@ ira_build (void)
 	}
       fprintf (ira_dump_file, "  regions=%d, blocks=%d, points=%d\n",
 	       current_loops == NULL ? 1 : number_of_loops (cfun),
-	       n_basic_blocks, ira_max_point);
+	       n_basic_blocks_for_fn (cfun), ira_max_point);
       fprintf (ira_dump_file,
 	       "    allocnos=%d (big %d), copies=%d, conflicts=%d, ranges=%d\n",
 	       ira_allocnos_num, nr_big, ira_copies_num, n, nr);
diff --git a/gcc/lcm.c b/gcc/lcm.c
index c13d2a6..6266d48 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -101,7 +101,7 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  qin = qout = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  qin = qout = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* We want a maximal solution, so make an optimistic initialization of
      ANTIN.  */
@@ -116,8 +116,8 @@ compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are predecessors of the exit block so that we
      can easily identify them below.  */
@@ -254,7 +254,7 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist
-    = XNEWVEC (basic_block, n_basic_blocks);
+    = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Initialize a mapping from each edge to its index.  */
   for (i = 0; i < num_edges; i++)
@@ -290,8 +290,8 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
   /* Note that we do not use the last allocated element for our queue,
      as EXIT_BLOCK is never inserted into it. */
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Iterate until the worklist is empty.  */
   while (qlen)
@@ -481,7 +481,7 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist =
-    XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+    XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
 
   /* We want a maximal solution.  */
   bitmap_vector_ones (avout, last_basic_block);
@@ -495,8 +495,8 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
+  qlen = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are successors of the entry block so that we
      can easily identify them below.  */
@@ -610,7 +610,7 @@ compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   /* Initialize NEARER for each edge and build a mapping from an edge to
      its index.  */
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index f3bad97..2839c5c 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -998,7 +998,7 @@ lra_create_live_ranges (bool all_p)
   lra_point_freq = point_freq_vec.address ();
   int *post_order_rev_cfg = XNEWVEC (int, last_basic_block);
   int n_blocks_inverted = inverted_post_order_compute (post_order_rev_cfg);
-  lra_assert (n_blocks_inverted == n_basic_blocks);
+  lra_assert (n_blocks_inverted == n_basic_blocks_for_fn (cfun));
   for (i = n_blocks_inverted - 1; i >= 0; --i)
     {
       bb = BASIC_BLOCK (post_order_rev_cfg[i]);
diff --git a/gcc/lra.c b/gcc/lra.c
index 1aea599..3c8b71d 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -2059,7 +2059,7 @@ has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/lto-streamer-in.c b/gcc/lto-streamer-in.c
index d4a52a7..fa537b2 100644
--- a/gcc/lto-streamer-in.c
+++ b/gcc/lto-streamer-in.c
@@ -586,7 +586,7 @@ make_new_block (struct function *fn, unsigned int index)
   basic_block bb = alloc_block ();
   bb->index = index;
   SET_BASIC_BLOCK_FOR_FUNCTION (fn, index, bb);
-  n_basic_blocks_for_function (fn)++;
+  n_basic_blocks_for_fn (fn)++;
   return bb;
 }
 
diff --git a/gcc/mcf.c b/gcc/mcf.c
index 52020b8..e0e40d8 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -471,12 +471,12 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
   int fnum_edges;
 
   /* Each basic_block will be split into 2 during vertex transformation.  */
-  int fnum_vertices_after_transform =  2 * n_basic_blocks;
-  int fnum_edges_after_transform = n_edges + n_basic_blocks;
+  int fnum_vertices_after_transform =  2 * n_basic_blocks_for_fn (cfun);
+  int fnum_edges_after_transform = n_edges + n_basic_blocks_for_fn (cfun);
 
   /* Count the new SOURCE and EXIT vertices to be added.  */
   int fmax_num_vertices =
-    fnum_vertices_after_transform + n_edges + n_basic_blocks + 2;
+    fnum_vertices_after_transform + n_edges + n_basic_blocks_for_fn (cfun) + 2;
 
   /* In create_fixup_graph: Each basic block and edge can be split into 3
      edges. Number of balance edges = n_basic_blocks. So after
@@ -486,10 +486,10 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
      max_edges = 2 * (4 * n_basic_blocks + 3 * n_edges)
      = 8 * n_basic_blocks + 6 * n_edges
      < 8 * n_basic_blocks + 8 * n_edges.  */
-  int fmax_num_edges = 8 * (n_basic_blocks + n_edges);
+  int fmax_num_edges = 8 * (n_basic_blocks_for_fn (cfun) + n_edges);
 
   /* Initial num of vertices in the fixup graph.  */
-  fixup_graph->num_vertices = n_basic_blocks;
+  fixup_graph->num_vertices = n_basic_blocks_for_fn (cfun);
 
   /* Fixup graph vertex list.  */
   fixup_graph->vertex_list =
@@ -508,7 +508,8 @@ create_fixup_graph (fixup_graph_type *fixup_graph)
   FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
     total_vertex_weight += bb->count;
 
-  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight / n_basic_blocks);
+  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight /
+				     n_basic_blocks_for_fn (cfun));
 
   k_pos = K_POS (sqrt_avg_vertex_weight);
   k_neg = K_NEG (sqrt_avg_vertex_weight);
diff --git a/gcc/profile.c b/gcc/profile.c
index 7118ac8..fb547ad 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -1156,9 +1156,9 @@ branch_prob (void)
 	num_instrumented++;
     }
 
-  total_num_blocks += n_basic_blocks;
+  total_num_blocks += n_basic_blocks_for_fn (cfun);
   if (dump_file)
-    fprintf (dump_file, "%d basic blocks\n", n_basic_blocks);
+    fprintf (dump_file, "%d basic blocks\n", n_basic_blocks_for_fn (cfun));
 
   total_num_edges += num_edges;
   if (dump_file)
@@ -1187,7 +1187,7 @@ branch_prob (void)
 
       /* Basic block flags */
       offset = gcov_write_tag (GCOV_TAG_BLOCKS);
-      for (i = 0; i != (unsigned) (n_basic_blocks); i++)
+      for (i = 0; i != (unsigned) (n_basic_blocks_for_fn (cfun)); i++)
 	gcov_write_unsigned (0);
       gcov_write_length (offset);
 
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 1917c46..3740934 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -3080,7 +3080,7 @@ convert_regs_2 (basic_block block)
      is only processed after all its predecessors.  The number of predecessors
      of every block has already been computed.  */
 
-  stack = XNEWVEC (basic_block, n_basic_blocks);
+  stack = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   sp = stack;
 
   *sp++ = block;
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 5b2c857..5e86fa5 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -672,7 +672,7 @@ regrename_analyze (bitmap bb_mask)
   n_bbs = pre_and_rev_post_order_compute (NULL, inverse_postorder, false);
 
   /* Gather some information about the blocks in this function.  */
-  rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks);
+  rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks_for_fn (cfun));
   i = 0;
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 204685d..077ee76 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -611,7 +611,7 @@ has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/reorg.c b/gcc/reorg.c
index e9aa889..fe6a751 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -3642,7 +3642,7 @@ dbr_schedule (rtx first)
 
   /* If the current function has no insns other than the prologue and
      epilogue, then do not try to fill any delay slots.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   /* Find the highest INSN_UID and allocate and initialize our map from
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index 8496014..287b826 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -3963,7 +3963,7 @@ sched_deps_init (bool global_p)
 {
   /* Average number of insns in the basic block.
      '+ 1' is used to make it nonzero.  */
-  int insns_in_block = sched_max_luid / n_basic_blocks + 1;
+  int insns_in_block = sched_max_luid / n_basic_blocks_for_fn (cfun) + 1;
 
   init_deps_data_vector ();
 
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index b70e071..8d23e33 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -625,7 +625,7 @@ schedule_ebbs (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   if (profile_info && flag_branch_probabilities)
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index b2a7dbd..20c29c5 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -793,7 +793,7 @@ haifa_find_rgns (void)
       /* Second traversal:find reducible inner loops and topologically sort
 	 block of each region.  */
 
-      queue = XNEWVEC (int, n_basic_blocks);
+      queue = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
 
       extend_regions_p = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS) > 0;
       if (extend_regions_p)
@@ -1153,7 +1153,7 @@ void
 extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 {
   int *order, i, rescan = 0, idx = *idxp, iter = 0, max_iter, *max_hdr;
-  int nblocks = n_basic_blocks - NUM_FIXED_BLOCKS;
+  int nblocks = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   max_iter = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS);
 
@@ -3115,7 +3115,7 @@ sched_rgn_init (bool single_blocks_p)
 
   /* Compute regions for scheduling.  */
   if (single_blocks_p
-      || n_basic_blocks == NUM_FIXED_BLOCKS + 1
+      || n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS + 1
       || !flag_schedule_interblock
       || is_cfg_nonregular ())
     {
@@ -3139,7 +3139,7 @@ sched_rgn_init (bool single_blocks_p)
 	free_dominance_info (CDI_DOMINATORS);
     }
 
-  gcc_assert (0 < nr_regions && nr_regions <= n_basic_blocks);
+  gcc_assert (0 < nr_regions && nr_regions <= n_basic_blocks_for_fn (cfun));
 
   RGN_BLOCKS (nr_regions) = (RGN_BLOCKS (nr_regions - 1) +
 			     RGN_NR_BLOCKS (nr_regions - 1));
@@ -3375,7 +3375,7 @@ schedule_insns (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   rgn_setup_common_sched_info ();
@@ -3421,8 +3421,8 @@ rgn_add_remove_insn (rtx insn, int remove_p)
 void
 extend_regions (void)
 {
-  rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks);
-  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks);
+  rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks_for_fn (cfun));
+  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks_for_fn (cfun));
   block_to_bb = XRESIZEVEC (int, block_to_bb, last_basic_block);
   containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
 }
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 4eb27c5..90bf1e2 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3649,7 +3649,7 @@ sel_recompute_toporder (void)
   int i, n, rgn;
   int *postorder, n_blocks;
 
-  postorder = XALLOCAVEC (int, n_basic_blocks);
+  postorder = XALLOCAVEC (int, n_basic_blocks_for_fn (cfun));
   n_blocks = post_order_compute (postorder, false, false);
 
   rgn = CONTAINING_RGN (BB_TO_BLOCK (0));
@@ -4912,10 +4912,10 @@ recompute_rev_top_order (void)
                                         rev_top_order_index_len);
     }
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
 
   n_blocks = post_order_compute (postorder, true, false);
-  gcc_assert (n_basic_blocks == n_blocks);
+  gcc_assert (n_basic_blocks_for_fn (cfun) == n_blocks);
 
   /* Build reverse function: for each basic block with BB->INDEX == K
      rev_top_order_index[K] is it's reverse topological sort number.  */
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index 08fdc77..c2d4185 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -7764,7 +7764,7 @@ run_selective_scheduling (void)
 {
   int rgn;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return;
 
   sel_global_init ();
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index 68f293c..ffbeed2 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -848,7 +848,7 @@ remove_reachable_equiv_notes (basic_block bb, struct st_expr *smexpr)
   rtx last, insn, note;
   rtx mem = smexpr->pattern;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun));
   sp = 0;
   ei = ei_start (bb->succs);
 
@@ -1208,7 +1208,7 @@ one_store_motion_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "STORE_MOTION of %s, %d basic blocks, ",
-	       current_function_name (), n_basic_blocks);
+	       current_function_name (), n_basic_blocks_for_fn (cfun));
       fprintf (dump_file, "%d insns deleted, %d insns created\n",
 	       n_stores_deleted, n_stores_created);
     }
diff --git a/gcc/tracer.c b/gcc/tracer.c
index 86557fe..400ee46 100644
--- a/gcc/tracer.c
+++ b/gcc/tracer.c
@@ -226,7 +226,7 @@ static bool
 tail_duplicate (void)
 {
   fibnode_t *blocks = XCNEWVEC (fibnode_t, last_basic_block);
-  basic_block *trace = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *trace = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   int *counts = XNEWVEC (int, last_basic_block);
   int ninsns = 0, nduplicated = 0;
   gcov_type weighted_insns = 0, traced_insns = 0;
@@ -370,7 +370,7 @@ tracer (void)
 {
   bool changed;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   mark_dfs_back_edges ();
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index d646693..b32f8ea 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -170,7 +170,7 @@ init_empty_tree_cfg_for_function (struct function *fn)
   /* Initialize the basic block array.  */
   init_flow (fn);
   profile_status_for_function (fn) = PROFILE_ABSENT;
-  n_basic_blocks_for_function (fn) = NUM_FIXED_BLOCKS;
+  n_basic_blocks_for_fn (fn) = NUM_FIXED_BLOCKS;
   last_basic_block_for_function (fn) = NUM_FIXED_BLOCKS;
   vec_alloc (basic_block_info_for_function (fn), initial_cfg_capacity);
   vec_safe_grow_cleared (basic_block_info_for_function (fn),
@@ -227,12 +227,12 @@ build_gimple_cfg (gimple_seq seq)
     factor_computed_gotos ();
 
   /* Make sure there is always at least one block, even if it's empty.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     create_empty_bb (ENTRY_BLOCK_PTR);
 
   /* Adjust the size of the array.  */
-  if (basic_block_info->length () < (size_t) n_basic_blocks)
-    vec_safe_grow_cleared (basic_block_info, n_basic_blocks);
+  if (basic_block_info->length () < (size_t) n_basic_blocks_for_fn (cfun))
+    vec_safe_grow_cleared (basic_block_info, n_basic_blocks_for_fn (cfun));
 
   /* To speed up statement iterator walks, we first purge dead labels.  */
   cleanup_dead_labels ();
@@ -602,7 +602,7 @@ create_bb (void *h, void *e, basic_block after)
   /* Add the newly created block to the array.  */
   SET_BASIC_BLOCK (last_basic_block, bb);
 
-  n_basic_blocks++;
+  n_basic_blocks_for_fn (cfun)++;
   last_basic_block++;
 
   return bb;
@@ -2100,7 +2100,7 @@ gimple_dump_cfg (FILE *file, int flags)
     {
       dump_function_header (file, current_function_decl, flags);
       fprintf (file, ";; \n%d basic blocks, %d edges, last basic block %d.\n\n",
-	       n_basic_blocks, n_edges, last_basic_block);
+	       n_basic_blocks_for_fn (cfun), n_edges, last_basic_block);
 
       brief_dump_cfg (file, flags | TDF_COMMENT);
       fprintf (file, "\n");
@@ -2135,9 +2135,9 @@ dump_cfg_stats (FILE *file)
   fprintf (file, fmt_str, "", "  instances  ", "used ");
   fprintf (file, "---------------------------------------------------------\n");
 
-  size = n_basic_blocks * sizeof (struct basic_block_def);
+  size = n_basic_blocks_for_fn (cfun) * sizeof (struct basic_block_def);
   total += size;
-  fprintf (file, fmt_str_1, "Basic blocks", n_basic_blocks,
+  fprintf (file, fmt_str_1, "Basic blocks", n_basic_blocks_for_fn (cfun),
 	   SCALE (size), LABEL (size));
 
   num_edges = 0;
@@ -7025,7 +7025,7 @@ dump_function_to_file (tree fndecl, FILE *file, int flags)
       if (!ignore_topmost_bind)
 	fprintf (file, "{\n");
 
-      if (any_var && n_basic_blocks_for_function (fun))
+      if (any_var && n_basic_blocks_for_fn (fun))
 	fprintf (file, "\n");
 
       FOR_EACH_BB_FN (bb, fun)
@@ -7403,7 +7403,7 @@ gimple_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (n_basic_blocks_for_fn (cfun) == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index c627d2c..0863f16 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -903,7 +903,7 @@ remove_forwarder_block_with_phi (basic_block bb)
 static unsigned int
 merge_phi_nodes (void)
 {
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
   basic_block *current = worklist;
   basic_block bb;
 
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index 77013b3..3edcaa4 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -4380,7 +4380,7 @@ gimple_expand_calls_inline (basic_block bb, copy_body_data *id)
 static void
 fold_marked_statements (int first, struct pointer_set_t *statements)
 {
-  for (; first < n_basic_blocks; first++)
+  for (; first < n_basic_blocks_for_fn (cfun); first++)
     if (BASIC_BLOCK (first))
       {
         gimple_stmt_iterator gsi;
@@ -4483,7 +4483,7 @@ optimize_inline_calls (tree fn)
 {
   copy_body_data id;
   basic_block bb;
-  int last = n_basic_blocks;
+  int last = n_basic_blocks_for_fn (cfun);
   struct gimplify_ctx gctx;
   bool inlined_p = false;
 
diff --git a/gcc/tree-ssa-ifcombine.c b/gcc/tree-ssa-ifcombine.c
index 73ebfe8..558f15f 100644
--- a/gcc/tree-ssa-ifcombine.c
+++ b/gcc/tree-ssa-ifcombine.c
@@ -677,7 +677,7 @@ tree_ssa_ifcombine (void)
      inner ones, and also that we do not try to visit a removed
      block.  This is opposite of PHI-OPT, because we cascade the
      combining rather than cascading PHIs. */
-  for (i = n_basic_blocks - NUM_FIXED_BLOCKS - 1; i >= 0; i--)
+  for (i = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS - 1; i >= 0; i--)
     {
       basic_block bb = bbs[i];
       gimple stmt = last_stmt (bb);
diff --git a/gcc/tree-ssa-loop-ch.c b/gcc/tree-ssa-loop-ch.c
index b74c56d..4591b91 100644
--- a/gcc/tree-ssa-loop-ch.c
+++ b/gcc/tree-ssa-loop-ch.c
@@ -145,9 +145,9 @@ copy_loop_headers (void)
       return 0;
     }
 
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  copied_bbs = XNEWVEC (basic_block, n_basic_blocks);
-  bbs_size = n_basic_blocks;
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  copied_bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
+  bbs_size = n_basic_blocks_for_fn (cfun);
 
   FOR_EACH_LOOP (li, loop, 0)
     {
diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index 2283b5b..aff9573 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -1591,7 +1591,7 @@ analyze_memory_references (void)
   /* Collect all basic-blocks in loops and sort them after their
      loops postorder.  */
   i = 0;
-  bbs = XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+  bbs = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   FOR_EACH_BB (bb)
     if (bb->loop_father != current_loops->tree_root)
       bbs[i++] = bb;
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index 2bb2253..0f3da98 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -191,7 +191,7 @@ compute_live_loop_exits (bitmap live_exits, bitmap use_blocks,
   /* Normally the work list size is bounded by the number of basic
      blocks in the largest loop.  We don't know this number, but we
      can be fairly sure that it will be relatively small.  */
-  worklist.create (MAX (8, n_basic_blocks / 128));
+  worklist.create (MAX (8, n_basic_blocks_for_fn (cfun) / 128));
 
   EXECUTE_IF_SET_IN_BITMAP (use_blocks, 0, i, bi)
     {
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index 9a29411..81aa843 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -510,7 +510,7 @@ execute_cse_reciprocals (void)
 
   occ_pool = create_alloc_pool ("dominators for recip",
 				sizeof (struct occurrence),
-				n_basic_blocks / 3 + 1);
+				n_basic_blocks_for_fn (cfun) / 3 + 1);
 
   memset (&reciprocal_stats, 0, sizeof (reciprocal_stats));
   calculate_dominance_info (CDI_DOMINATORS);
diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c
index ef114a0..b000ec6 100644
--- a/gcc/tree-ssa-phiopt.c
+++ b/gcc/tree-ssa-phiopt.c
@@ -335,7 +335,7 @@ tree_ssa_phiopt_worker (bool do_store_elim, bool do_hoist_loads)
      outer ones, and also that we do not try to visit a removed
      block.  */
   bb_order = single_pred_before_succ_order ();
-  n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  n = n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS;
 
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 1f5ff23..22a95bb 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -3721,7 +3721,7 @@ compute_avail (void)
     }
 
   /* Allocate the worklist.  */
-  worklist = XNEWVEC (basic_block, n_basic_blocks);
+  worklist = XNEWVEC (basic_block, n_basic_blocks_for_fn (cfun));
 
   /* Seed the algorithm by putting the dominator children of the entry
      block on the worklist.  */
@@ -4652,7 +4652,7 @@ init_pre (void)
   connect_infinite_loops_to_exit ();
   memset (&pre_stats, 0, sizeof (pre_stats));
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   postorder_num = inverted_post_order_compute (postorder);
 
   alloc_aux_for_blocks (sizeof (struct bb_bitmap_sets));
@@ -4728,7 +4728,7 @@ do_pre (void)
      fixed, don't run it when we have an incredibly large number of
      bb's.  If we aren't going to run insert, there is no point in
      computing ANTIC, either, even though it's plenty fast.  */
-  if (n_basic_blocks < 4000)
+  if (n_basic_blocks_for_fn (cfun) < 4000)
     {
       compute_antic ();
       insert ();
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 538a8ef..a79d1c1 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -4535,7 +4535,7 @@ init_reassoc (void)
 {
   int i;
   long rank = 2;
-  int *bbs = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  int *bbs = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
 
   /* Find the loops, so that we can prevent moving calculations in
      them.  */
@@ -4565,7 +4565,7 @@ init_reassoc (void)
     }
 
   /* Set up rank for each BB  */
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
     bb_rank[bbs[i]] = ++rank  << 16;
 
   free (bbs);
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index ed4e1db..2930054 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3972,13 +3972,14 @@ init_scc_vn (void)
   shared_lookup_phiargs.create (0);
   shared_lookup_references.create (0);
   rpo_numbers = XNEWVEC (int, last_basic_block);
-  rpo_numbers_temp = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rpo_numbers_temp
+    = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   pre_and_rev_post_order_compute (NULL, rpo_numbers_temp, false);
 
   /* RPO numbers is an array of rpo ordering, rpo[i] = bb means that
      the i'th block in RPO order is bb.  We want to map bb's to RPO
      numbers, so we need to rearrange this array.  */
-  for (j = 0; j < n_basic_blocks - NUM_FIXED_BLOCKS; j++)
+  for (j = 0; j < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; j++)
     rpo_numbers[rpo_numbers_temp[j]] = j;
 
   XDELETE (rpo_numbers_temp);
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index db95ce1..07e7da8 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -761,11 +761,11 @@ static void
 init_worklist (void)
 {
   alloc_aux_for_blocks (sizeof (struct aux_bb_info));
-  same_succ_htab.create (n_basic_blocks);
+  same_succ_htab.create (n_basic_blocks_for_fn (cfun));
   same_succ_edge_flags = XCNEWVEC (int, last_basic_block);
   deleted_bbs = BITMAP_ALLOC (NULL);
   deleted_bb_preds = BITMAP_ALLOC (NULL);
-  worklist.create (n_basic_blocks);
+  worklist.create (n_basic_blocks_for_fn (cfun));
   find_same_succ ();
 
   if (dump_file && (dump_flags & TDF_DETAILS))
@@ -993,7 +993,7 @@ static vec<bb_cluster> all_clusters;
 static void
 alloc_cluster_vectors (void)
 {
-  all_clusters.create (n_basic_blocks);
+  all_clusters.create (n_basic_blocks_for_fn (cfun));
 }
 
 /* Reset all cluster vectors.  */
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index 5255d7fb..5a65fd9 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -192,7 +192,7 @@ associate_equivalences_with_edges (void)
 
 	      /* Now walk over the blocks to determine which ones were
 		 marked as being reached by a useful case label.  */
-	      for (i = 0; i < n_basic_blocks; i++)
+	      for (i = 0; i < n_basic_blocks_for_fn (cfun); i++)
 		{
 		  tree node = info[i];
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 8b07f9f..700d42f 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -838,7 +838,7 @@ vt_stack_adjustments (void)
   VTI (ENTRY_BLOCK_PTR)->out.stack_adjust = INCOMING_FRAME_SP_OFFSET;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, n_basic_blocks_for_fn (cfun) + 1);
   sp = 0;
 
   /* Push the first edge on to the stack.  */
@@ -6904,10 +6904,10 @@ vt_find_locations (void)
   timevar_push (TV_VAR_TRACKING_DATAFLOW);
   /* Compute reverse completion order of depth first search of the CFG
      so that the data-flow runs faster.  */
-  rc_order = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
   bb_order = XNEWVEC (int, last_basic_block);
   pre_and_rev_post_order_compute (NULL, rc_order, false);
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
     bb_order[rc_order[i]] = i;
   free (rc_order);
 
@@ -10157,7 +10157,8 @@ variable_tracking_main_1 (void)
       return 0;
     }
 
-  if (n_basic_blocks > 500 && n_edges / n_basic_blocks >= 20)
+  if (n_basic_blocks_for_fn (cfun) > 500
+      && n_edges / n_basic_blocks_for_fn (cfun) >= 20)
     {
       vt_debug_insns_local (true);
       return 0;
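
For reference, every hunk above is the same mechanical substitution.
A minimal sketch of the macro side of the rename, assuming the
definitions live in basic-block.h and that the underlying field is
called x_n_basic_blocks (both assumptions for illustration, not
quoted from this patch):

/* Before: the short form implicitly reads the global cfun.  */
#define n_basic_blocks_for_function(FN) ((FN)->cfg->x_n_basic_blocks)
#define n_basic_blocks n_basic_blocks_for_function (cfun)

/* After: one macro, with the struct function made explicit at every
   call site.  */
#define n_basic_blocks_for_fn(FN) ((FN)->cfg->x_n_basic_blocks)

Since the macro expands to an lvalue, uses such as
n_basic_blocks_for_fn (cfun)++ in the create_bb hunk above remain
valid.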
