This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
Re: [middle-end, patch 7/8] Inlining of indirect calls
- From: Jan Hubicka <jh at suse dot cz>
- To: Martin Jambor <mjambor at suse dot cz>
- Cc: GCC Patches <gcc-patches at gcc dot gnu dot org>, Jan Hubicka <jh at suse dot cz>, Kenneth Zadeck <zadeck at naturalbridge dot com>, Razya Ladelsky <RAZYA at il dot ibm dot com>, Paolo Carlini <paolo dot carlini at oracle dot com>
- Date: Wed, 16 Jul 2008 00:07:05 +0200
- Subject: Re: [middle-end, patch 7/8] Inlining of indirect calls
- References: <20080715194347.569852675@virgil.suse.cz> <20080715194421.450628592@virgil.suse.cz>
> 2008-07-15 Martin Jambor <mjambor@suse.cz>
>
> * ipa-inline.c (cgraph_consider_new_edge_for_inlining): New function.
> (cgraph_decide_recursive_inlining): Call
> ipa_propagate_indirect_call_infos if performing indirect inlining.
> (add_new_indirect_edges_to_heap): New function.
> (cgraph_decide_inlining_of_small_functions): Call
> add_new_indirect_edges_to_heap after recursive inlining when
> performing indirect inlining, call ipa_propagate_indirect_call_infos
> after ordinary inlining in that situation.
> (cgraph_decide_inlining): Call ipa_propagate_indirect_call_infos after
> inlining if performing indirect inlining. Call
> free_all_ipa_structures_after_iinln when doing so too.
> (inline_generate_summary): Do not call
> free_all_ipa_structures_after_iinln here.
>
> * ipa-prop.c: Include fibheap.h.
> (update_jump_functions_after_inlining): New function.
> (print_edge_addition_message): New function.
> (update_call_notes_after_inlining): New function.
> (propagate_info_to_inlined_callees): New function.
> (ipa_propagate_indirect_call_infos): New function.
>
> * ipa-prop.h: Include fibheap.h.
> (struct ipa_param_call_note): New field processed.
>
> * cgraph.h (cgraph_edge): Shrink loop_nest field to 31 bits, add a new
> flag indirect_call.
>
> * cgraphunit.c (verify_cgraph_node): Allow indirect edges not to have
> rediscovered call statements.
>
> * cgraph.c (cgraph_create_edge): Initialize indirect_call to zero.
> (dump_cgraph_node): Dump also the indirect_call flag.
> (cgraph_clone_edge): Copy also the indirect_call flag.
>
> * tree-inline.c (copy_bb): Do not check for fndecls from call
> expressions, check for edge availability when moving clones.
> (get_indirect_callee_fndecl): New function.
> (expand_call_inline): If callee declaration is not apparent from the
> statement, try calling get_indirect_callee_fndecl.
>
>
> +
> +/* Updates the param called notes associated with NODE when CS is being
> + inlined, assuming NODE is (potentially indirectly) inlined into CS->callee.
> + Moreover, if the callee is discovered to be constant, a new cgraph edge is
> + created for it. Finally, if HEAP is non-NULL, such new edges are added to
> + the heap through cgraph_consider_new_edge_for_inlining. */
> +static void
> +update_call_notes_after_inlining (fibheap_t heap, struct cgraph_edge *cs,
> + struct cgraph_node *node)
Hmm, I am not a big fan of exposing the heap that is internal to a
particular implementation of the inlining heuristic to the outside world.
What about making update_call_notes_after_inlining simply collect the new
edges into a VECtor that is returned by ipa_propagate_indirect_call_infos?
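Outside of GCC's VEC and fibheap types, the shape of that refactoring might look roughly like the following untested sketch — the propagation pass collects new edges into a vector it hands back, and the inliner alone decides whether to push them onto its heap. All names and types here are made-up stand-ins, not GCC's actual API:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct cgraph_edge; just a uid for illustration.  */
struct edge { int uid; };

/* Minimal growable vector, playing the role of GCC's VEC.  */
struct edge_vec { struct edge **elts; size_t len, cap; };

static void
edge_vec_push (struct edge_vec *v, struct edge *e)
{
  if (v->len == v->cap)
    {
      v->cap = v->cap ? 2 * v->cap : 4;
      v->elts = realloc (v->elts, v->cap * sizeof *v->elts);
    }
  v->elts[v->len++] = e;
}

/* The propagation pass returns the newly discovered direct edges in a
   vector instead of inserting them into the inliner's private heap.
   For this sketch, every input note simply yields a new edge.  */
static struct edge_vec
propagate_indirect_call_infos (struct edge *new_edges, size_t n)
{
  struct edge_vec v = { NULL, 0, 0 };
  for (size_t i = 0; i < n; i++)
    edge_vec_push (&v, &new_edges[i]);
  return v;
}
```

The caller in ipa-inline.c would then walk the returned vector and feed each edge to its heap, keeping the fibheap an implementation detail of the heuristic.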
> struct ipa_param_call_note
> {
> + /* Set when we have already found the target to be a compile time constant
> + and turned this into an edge or when the note was found unusable for some
> + reason. */
> + bool processed;
It would probably be better if the structure's fields were ordered by size.
> /* Index of the parameter that is called. */
> unsigned int formal_id;
> /* Statement that contains the call to the parameter above. */
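The point about field ordering can be illustrated outside GCC: on a typical LP64 ABI, a lone bool sitting between wider members forces alignment padding, while sorting fields from widest to narrowest packs them tightly. The field names below are only loosely modeled on ipa_param_call_note, and the second bool is invented purely to make the padding visible:

```c
#include <stdbool.h>

/* A bool between wider members gets padded up to the alignment of
   whatever follows it.  */
struct note_unordered
{
  bool processed;          /* 1 byte + 3 bytes padding before the int */
  unsigned int formal_id;  /* 4 bytes */
  bool extra_flag;         /* 1 byte + 7 bytes padding before the pointer */
  void *stmt;              /* 8 bytes on LP64 */
};

/* Same fields ordered from widest to narrowest; the bools share the
   tail and only the final struct-size rounding remains.  */
struct note_ordered
{
  void *stmt;
  unsigned int formal_id;
  bool processed;
  bool extra_flag;
};
```

On common 64-bit targets the first layout is 24 bytes and the second 16, though the exact numbers are ABI-dependent.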
> @@ -377,6 +382,7 @@ void ipa_count_formal_params (struct cgr
> void ipa_create_param_decls_array (struct cgraph_node *);
> void ipa_detect_param_modifications (struct cgraph_node *);
> void ipa_analyze_params_uses (struct cgraph_node *);
> +void ipa_propagate_indirect_call_infos (fibheap_t, struct cgraph_edge *);
>
> /* Debugging interface. */
> void ipa_print_all_tree_maps (FILE *);
> @@ -385,4 +391,8 @@ void ipa_print_all_param_flags (FILE *);
> void ipa_print_node_jump_functions (FILE *f, struct cgraph_node *node);
> void ipa_print_all_jump_functions (FILE * f);
>
> +/* From ipa-inline.c */
> +void cgraph_consider_new_edge_for_inlining (fibheap_t heap,
> + struct cgraph_edge *edge);
> +
And this can become local to ipa-inline too.
> Index: iinln/gcc/tree-inline.c
> ===================================================================
> --- iinln.orig/gcc/tree-inline.c
> +++ iinln/gcc/tree-inline.c
> @@ -951,7 +951,7 @@ copy_bb (copy_body_data *id, basic_block
> pointer_set_insert (id->statements_to_fold, stmt);
> /* We're duplicating a CALL_EXPR. Find any corresponding
> callgraph edges and update or duplicate them. */
> - if (call && (decl = get_callee_fndecl (call)))
> + if (call)
> {
> struct cgraph_node *node;
> struct cgraph_edge *edge;
> @@ -962,7 +962,8 @@ copy_bb (copy_body_data *id, basic_block
> edge = cgraph_edge (id->src_node, orig_stmt);
> if (edge)
> cgraph_clone_edge (edge, id->dst_node, stmt,
> - REG_BR_PROB_BASE, 1, edge->frequency, true);
> + REG_BR_PROB_BASE, 1,
> + edge->frequency, true);
> break;
>
> case CB_CGE_MOVE_CLONES:
> @@ -971,8 +972,8 @@ copy_bb (copy_body_data *id, basic_block
> node = node->next_clone)
> {
> edge = cgraph_edge (node, orig_stmt);
> - gcc_assert (edge);
> - cgraph_set_call_stmt (edge, stmt);
> + if (edge)
> + cgraph_set_call_stmt (edge, stmt);
You need to suppress -Winline warnings and always_inline sorry messages
for indirect calls here.
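In tree-inline.c terms, the request amounts to a guard on the edge's new flag before the warning/sorry paths fire. A stand-alone sketch of that guard, with made-up stand-ins for GCC's edge and diagnostic machinery (not the real inliner code):

```c
#include <stdio.h>

/* Stand-in for the relevant bit of struct cgraph_edge.  */
struct cgraph_edge
{
  unsigned int indirect_call : 1;
};

static int inline_warnings;

/* Report that EDGE could not be inlined, except for edges that were
   only discovered through indirect-call propagation: the user never
   wrote a direct call there, so a -Winline warning (or an
   always_inline "sorry") would be spurious.  */
static void
report_inline_failure (struct cgraph_edge *edge, const char *reason)
{
  if (edge->indirect_call)
    return;
  fprintf (stderr, "warning: inlining failed: %s\n", reason);
  inline_warnings++;
}
```

With this guard, only edges corresponding to calls actually written by the user produce diagnostics.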
Otherwise the patch looks fine.
Honza