Re: [PATCH, pretty-ipa] Nuke call graph edges after inlining in all cases


Hi,

On Sat, Jan 03, 2009 at 02:41:20PM +0100, Jan Hubicka wrote:
> > optimize_inline_calls()  always discards all  edges outgoing  from the
> > function into  which it has previously inlined  other stuff.  However,
> > this function is called from  ipa-inline.c only when there actually is
> > some inlining  (or early inlining) to do.   Consequently, the outgoing
> > edges  are either  stripped off  or  left hanging  there depending  on
> > inlining decisions.
> > 
> > For  the sake  of  consistency,  the following  patch  moves the  edge
> > cleansing  to  the  two  callers  of this  function  and  performs  it
> > unconditionally.
> > 
> > Bootstrapped and tested on x86_64-suse-linux, OK for pretty-ipa?
> 
> It would probably make most sense to make this a pass manager property
> destroyed by local passes.  The early inliner is not the only reason why
> the edges are wrong, i.e. with -fno-early-inlining I think your patch
> will cause wrong edges to appear to later local passes....
> It is not very pretty to modify the entry of every local pass we have in
> order to destroy cgraph edges, but I wonder what alternatives we have?
> 
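
For illustration, this is roughly what the quoted proposal amounts to (a
sketch only; the caller name and the inlining predicate are made up, only
optimize_inline_calls and cgraph_node_remove_callees are real functions):

  /* Hypothetical caller, in the style of the places that call
     optimize_inline_calls.  */
  static void
  expand_function_body (struct cgraph_node *node)
  {
    /* optimize_inline_calls is only reached when there is something to
       inline, and it used to remove the callee edges itself ...  */
    if (node_has_inline_decisions_p (node))	/* made-up predicate */
      optimize_inline_calls (node->decl);

    /* ... so doing the removal here, in the caller, makes it happen
       whether or not any inlining took place.  */
    cgraph_node_remove_callees (node);
  }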


I have discussed this with Honza on IRC and we came to the conclusion
that, for now, a simple special-purpose pass that removes the edges,
scheduled right after inlining and early inlining, will do: we already
depend on such passes for rebuilding the edges and, moreover, making
pass properties work nicely with three-stage IPA passes would be too
big a task at this moment.
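
Concretely, the idea is just to slot an edge-removing pass into the two
places right after inlining.  A condensed sketch of the resulting
ordering (the real hunks are in the passes.c part of the patch below):

  /* Early, per-function optimizations:  */
  NEXT_PASS (pass_rebuild_cgraph_edges);	/* edges are rebuilt here,   */
  NEXT_PASS (pass_early_inline);		/* early inlining can leave  */
  NEXT_PASS (pass_remove_cgraph_callee_edges);	/* them stale, so drop them. */

  /* Late, per-function optimizations after IPA inlining:  */
  NEXT_PASS (pass_all_optimizations);
      NEXT_PASS (pass_remove_cgraph_callee_edges);
      ...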

I have bootstrapped and tested the following patch on x86_64-linux-gnu
without any problems, OK for pretty-ipa?

Thanks,

Martin


2009-01-05  Martin Jambor  <mjambor@suse.cz>

	* tree-inline.c (optimize_inline_calls): Do not call
	cgraph_node_remove_callees.
	* cgraphbuild.c (remove_cgraph_callee_edges): New function.
	(pass_remove_cgraph_callee_edges): New variable.
	* passes.c (init_optimization_passes): Add
	pass_remove_cgraph_callee_edges after early inlining and before all
	late intraprocedural passes.


Index: isra/gcc/tree-inline.c
===================================================================
--- isra.orig/gcc/tree-inline.c
+++ isra/gcc/tree-inline.c
@@ -3544,10 +3544,6 @@ optimize_inline_calls (tree fn)
   /* Renumber the lexical scoping (non-code) blocks consecutively.  */
   number_blocks (fn);
 
-  /* We are not going to maintain the cgraph edges up to date.
-     Kill it so it won't confuse us.  */
-  cgraph_node_remove_callees (id.dst_node);
-
   fold_cond_expr_cond ();
 
   /* It would be nice to check SSA/CFG/statement consistency here, but it is
Index: isra/gcc/cgraphbuild.c
===================================================================
--- isra.orig/gcc/cgraphbuild.c
+++ isra/gcc/cgraphbuild.c
@@ -281,3 +281,31 @@ struct gimple_opt_pass pass_rebuild_cgra
   0,					/* todo_flags_finish */
  }
 };
+
+
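+/* Remove the callee edges of the current function's call graph node.  */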
+static unsigned int
+remove_cgraph_callee_edges (void)
+{
+  cgraph_node_remove_callees (cgraph_node (current_function_decl));
+  return 0;
+}
+
+struct gimple_opt_pass pass_remove_cgraph_callee_edges =
+{
+ {
+  GIMPLE_PASS,
+  NULL,					/* name */
+  NULL,					/* gate */
+  remove_cgraph_callee_edges,		/* execute */
+  NULL,					/* sub */
+  NULL,					/* next */
+  0,					/* static_pass_number */
+  0,					/* tv_id */
+  0,					/* properties_required */
+  0,					/* properties_provided */
+  0,					/* properties_destroyed */
+  0,					/* todo_flags_start */
+  0,					/* todo_flags_finish */
+ }
+};
Index: isra/gcc/passes.c
===================================================================
--- isra.orig/gcc/passes.c
+++ isra/gcc/passes.c
@@ -551,6 +551,7 @@ init_optimization_passes (void)
 	  struct opt_pass **p = &pass_all_early_optimizations.pass.sub;
 	  NEXT_PASS (pass_rebuild_cgraph_edges);
 	  NEXT_PASS (pass_early_inline);
+	  NEXT_PASS (pass_remove_cgraph_callee_edges);
 	  NEXT_PASS (pass_rename_ssa_copies);
 	  NEXT_PASS (pass_ccp);
 	  NEXT_PASS (pass_forwprop);
@@ -586,6 +587,7 @@ init_optimization_passes (void)
   NEXT_PASS (pass_all_optimizations);
     {
       struct opt_pass **p = &pass_all_optimizations.pass.sub;
+      NEXT_PASS (pass_remove_cgraph_callee_edges);
       /* Initial scalar cleanups before alias computation.
 	 They ensure memory accesses are not indirect wherever possible.  */
       NEXT_PASS (pass_strip_predict_hints);
Index: isra/gcc/tree-pass.h
===================================================================
--- isra.orig/gcc/tree-pass.h
+++ isra/gcc/tree-pass.h
@@ -387,6 +387,7 @@ extern struct gimple_opt_pass pass_uncpr
 extern struct gimple_opt_pass pass_return_slot;
 extern struct gimple_opt_pass pass_reassoc;
 extern struct gimple_opt_pass pass_rebuild_cgraph_edges;
+extern struct gimple_opt_pass pass_remove_cgraph_callee_edges;
 extern struct gimple_opt_pass pass_build_cgraph_edges;
 extern struct gimple_opt_pass pass_reset_cc_flags;
 extern struct gimple_opt_pass pass_local_pure_const;

