This is the mail archive of the `gcc-patches@gcc.gnu.org` mailing list for the GCC project.


*From*: Diego Novillo <dnovillo at redhat dot com>
*To*: Daniel Berlin <dberlin at dberlin dot org>
*Cc*: GCC Patches <gcc-patches at gcc dot gnu dot org>
*Date*: Mon, 9 May 2005 16:50:20 -0400
*Subject*: Re: [PATCH] Structure aliasing analyzer
*References*: <1114798329.29362.16.camel@linux.site>

On Fri, Apr 29, 2005 at 02:12:08PM -0400, Daniel Berlin wrote:

> I've got a bunch of testcases to verify that the *results* of the
> points-to analysis are correct, and i can add tests to verify that the
> first alias analysis pass produces the results we want instead of it not
> happening until the 4th or 5th, as it does now.

These will be good to have.

Cool stuff.  We badly need this, but we still have a lot of things to
do.  Finer alias info will make some of our representational problems
even worse.  Without going into a pure oracle-based system like the RTL
optimizers, we will need to move into a sort of hybrid that gives us
the best of both worlds.  Clearly, we can't keep exposing everything in
the IL like we do today.

Some questions and obvious feedback below.  I'm still not totally done
wrapping my brain around it all, so some of the questions will be about
basic algorithmic stuff.

- Timings for pass_build_pta on things like DLV, MICO, SPEC, TRAMP and
  cc1-i-files?

- It's a bit disappointing that we don't get better results.  If we
  are getting good alias information right off the bat, the scalar
  cleanups, redundancy elimination and such should be getting better.
  Any idea what's getting in the way?

- Regardless of how well we do on reducing memory footprint and
  whatnot, I think we will always want to have different levels of
  alias information.  Perhaps at -O1 we won't want to have points-to
  information for structure fields.  pass_build_pta needs to take that
  into account.  It's not so much the time we spend in pass_build_pta,
  but the effects of more detailed alias info on the rest of the
  optimizers.

- Can the implementation in its present form totally replace
  compute_points_to_and_addr_escape?  If so, let's get rid of it.  No
  point keeping that hack around.  It's fine if you want to do this in
  more than one stage, but I want to get rid of it for 4.1.

- More importantly, can we also subsume
  compute_flow_insensitive_aliasing?  That would seriously rock.
This should also be done for 4.1.  I may be able to lend a hand if
you're pressed for time.

> +/* The idea behind this analyzer is to generate set constraints from the
> +   program, then solve the resulting constraints in order to generate the
> +   points-to sets.
> + [ ... ]

Add pointers to the literature.

> +     /* True if the address of this variable is taken.  Needed for
> +        rountev-chandra.  */
>          ^^^^^^^^^^^^^^^

They get their due credit when we reference their paper.  Let's use the
generic algorithm name otherwise.

> +/* Variable that represents readonly memory.  */

s/readonly/read-only/

> +/* Variable that represents integers.  */
> +static varinfo_t var_integer;
> +static tree integer_tree;
> +static unsigned int integer_id;
> +

Why are integers important as a class here?

> +struct constraint_expr
> +{
> +  /* Constraint type.  */
> +  constraint_expr_type type;
> +
> +  /* Variable we are referring to in the constraint.  */
> +  unsigned int var;
> +
> +  /* Offset is in bits */
> +  unsigned HOST_WIDE_INT offset;

Offset of what?  Need to be more explicit here.

> +/* Constraints are made up of two constraint expressions, one LHS, and one
> +   RHS.  */

Need to document what the constraint describes.

> +struct constraint
> +{
> +  struct constraint_expr lhs;
> +  struct constraint_expr rhs;
> +};
> +
> +/* List of constraints that we use to build the constraint graph from.  */
> +
> +static VEC(constraint_t,gc) *constraints;
> +static alloc_pool constraint_pool;
> +
> +/* An edge in the constraint graph.  We technically have no use for
> +   the src, since it will always be the same node that we are indexing
> +   into the pred/succ arrays with, but it's nice for checking
> +   purposes.  The edges are weighted, with a bit set in weights for
> +   each edge from src to dest with that weight.  */
> +

Document briefly how the weights are used and what they mean.

> +/* The constraint graph is simply a set of adjacency vectors, one per
> +   variable.
> +   succs[x] is the vector of successors for variable x, and preds[x]
> +   is the vector of predecessors for variable x.
> +   IOW, all edges are "forward" edges, which is not like our CFG.
> +   So remember that
> +   preds[x]->src == x, and
> +   succs[x]->src == x  */
> +

Here's where you also document what the whole constraint graph and its
edges mean as a whole.  A code snippet with a couple of pointers and
pointed-to variables and its corresponding constraint graph to
illustrate will help.  Particularly useful when following the
pseudo-code for the constraint solver down below.  Having a good
intuitive idea of what the graph means is very helpful.

> +/* Print out constraint C to FILE.  */
> +
> +void
> +print_constraint (FILE *file, constraint_t c)

s/print/dump/, for consistency with most of the other IL debugging
stuff.

> +/* Print out constraint C to stdout.  */
> +
> +void
> +debug_constraint (constraint_t c)
> +{
> +  print_constraint (stdout, c);

Better use 'stderr'.

> +void
> +print_constraints (FILE *file)
> +{
> +  int i;
> +  constraint_t c;
> +  for (i = 0; VEC_iterate (constraint_t, constraints, i, c); i++)
> +    print_constraint (file, c);
> +}
> +
> +/* Print out all constraints to stdout.  */
> +
> +void
> +debug_constraints (void)
> +{
> +  print_constraints (stdout);
> +}
> +

Likewise.

> +/* Insert constraint C into the list of complex constraints for VAR.  */
> +
> +static void
> +insert_into_complicated (unsigned int var, constraint_t c)

s/complicated/complex/

> +static void
> +clear_edges_for_node (constraint_graph_t graph, unsigned int node)
> +{
> +  VEC(constraint_edge_t,gc) *succvec = graph->succs[node];
> +  VEC(constraint_edge_t,gc) *predvec = graph->preds[node];
> +  constraint_edge_t c;
> +  int i;
> +
> +  /* Walk the successors, erase the associated preds.
> +  */
> +  for (i = 0; VEC_iterate (constraint_edge_t, succvec, i, c); i++)
> +    if (c->dest != node)
> +      {
> +        unsigned int place;
> +        struct constraint_edge lookfor;
> +        lookfor.src = c->dest;
> +        lookfor.dest = node;
> +        place = VEC_lower_bound (constraint_edge_t, graph->preds[c->dest],
> +                                 &lookfor, constraint_edge_less);
> +        VEC_ordered_remove (constraint_edge_t, graph->preds[c->dest], place);
> +      }

Vertical spacing.

> +  /* Walk the preds, erase the associated succs.  */
> +  for (i =0; VEC_iterate (constraint_edge_t, predvec, i, c); i++)
> +    if (c->dest != node)
> +      {
> +        unsigned int place;
> +        struct constraint_edge lookfor;
> +        lookfor.src = c->dest;
> +        lookfor.dest = node;
> +        place = VEC_lower_bound (constraint_edge_t, graph->succs[c->dest],
> +                                 &lookfor, constraint_edge_less);
> +        VEC_ordered_remove (constraint_edge_t, graph->succs[c->dest], place);
> +      }
> +
> +  graph->preds[node] = NULL;
> +  graph->succs[node] = NULL;
> +}
> +
> +static bool int_add_graph_edge (constraint_graph_t, unsigned int,
> +                                unsigned int, unsigned HOST_WIDE_INT);

This forward declaration is not needed.

> +static bool add_graph_edge (constraint_graph_t, struct constraint_edge);
> +static bitmap get_graph_weights (constraint_graph_t, struct constraint_edge);
> +

Can these two be moved here instead of being forward declared?

> +/* Information needed to compute the topographic ordering of a graph.  */
> +
> +struct topo_info
> +{
> +  /* sbitmap of visited nodes.  */
> +  sbitmap visited;

Vertical spacing.

> +  /* Array that stores the topographic order of the graph, *in
> +     reverse*.  */
> +  VEC(uint,heap) *topo_order;
> +};
> +
> +/* Initialize and return a topograph info structure.
> +  */
> +
> +static struct topo_info *
> +init_topo_info (void)
> +{
> +  size_t size = VEC_length (varinfo_t, varmap);
> +  struct topo_info *ti = xmalloc (sizeof (struct topo_info));
> +  ti->visited = sbitmap_alloc (size);
> +  sbitmap_zero (ti->visited);
> +  ti->topo_order = VEC_alloc (uint, heap, 1);
> +  return ti;
> +}
> +
> +/* Free the topographic sort info pointed to by TI.  */
> +
> +static void
> +free_topo_info (struct topo_info *ti)
> +{
> +  sbitmap_free (ti->visited);
> +  VEC_free (uint, heap, ti->topo_order);
> +  free (ti);
> +}
> +
> +/* Visit the graph in topographical order, and store the order in the
> +   topo_info structure.  */
> +
> +static void
> +topo_visit (constraint_graph_t graph, struct topo_info *ti,
> +            unsigned int n)
> +{
> +  VEC(constraint_edge_t,gc) *succs = graph->succs[n];
> +  constraint_edge_t c;
> +  int i;
> +  SET_BIT (ti->visited, n);
> +  for (i = 0; VEC_iterate (constraint_edge_t, succs, i, c); i++)
> +    {
> +      if (!TEST_BIT (ti->visited, c->dest))
> +        topo_visit (graph, ti, c->dest);
> +    }
> +  VEC_safe_push (uint, heap, ti->topo_order, n);
> +}
> +
> +/* Return true if variable N + OFFSET is a legal field of N.  */
> +
> +static bool
> +type_safe (unsigned int n, unsigned HOST_WIDE_INT *offset)
> +{
> +  varinfo_t ninfo = get_varinfo (n);
> +
> +  /* For things we've globbed to single variables, any offset into the
> +     variable acts like the entire variable, so that it becomes offset
> +     0.  */
> +  if (n == anything_id
> +      || ninfo->is_artificial_var
> +      || ninfo->is_unknown_size_var)
> +    {
> +      *offset = 0;
> +      return true;
> +    }
> +  return n > anything_id
> +    && (get_varinfo (n)->offset + *offset) < get_varinfo (n)->fullsize;
> +}
> +
> +/* Process a constraint C that represents *x = &y.
> +  */
> +
> +static void
> +do_da_constraint (constraint_graph_t graph ATTRIBUTE_UNUSED,
> +                  constraint_t c, bitmap delta)
> +{
> +  unsigned int rhs = c->rhs.var;
> +  unsigned HOST_WIDE_INT offset = c->lhs.offset;
> +  unsigned int j;
> +  bitmap_iterator bi;
> +
> +  /* For each member j of Delta (Sol(x)), add x to Sol(j) */
> +  EXECUTE_IF_SET_IN_BITMAP (delta, 0, j, bi)
> +    {
> +      if (type_safe (j, &offset))
> +        {
> +          /* *x != NULL && *x != ANYTHING*/

Spacing.

> +/* Compute a topographic order for GRAPH, and store the result in the
> +   topo_info structure TI.  */
> +
> +static void
> +compute_topo_order (constraint_graph_t graph,
> +                    struct topo_info *ti)

No line break.

> +/* Perform offline variable substitution.
> +
> +   This is a linear time way of identifying variables that must have
> +   equivalent points-to sets, including those caused by static cycles,
> +   and single entry subgraphs, in the constraint graph.
> +
> +   The technique is described in "Off-line variable substitution for
> +   scaling points-to analysis" by Atanas Rountev and Satish Chandra,
> +   in "ACM SIGPLAN Notices" volume 35, number 5, pages 47-56.  */
> +
> +static void
> +perform_rountev_chandra (constraint_graph_t graph)

s/perform_rountev_chandra/perform_var_subst/ or something more
meaningful.  Likewise for all the other instances of rountev_chandra.

> +{
> +  struct topo_info *ti = init_topo_info ();
> +
> +  /* Compute the topographic ordering of the graph, then visit each
> +     node in topographic order.
> +  */
> +  compute_topo_order (graph, ti);
> +
> +  while (VEC_length (uint, ti->topo_order) != 0)
> +    {
> +      unsigned int i = VEC_pop (uint, ti->topo_order);
> +      unsigned int pred;
> +      varinfo_t vi = get_varinfo (i);
> +      bool okay_to_elim = false;
> +      unsigned int root = VEC_length (varinfo_t, varmap);
> +      VEC(constraint_edge_t,gc) *predvec = graph->preds[i];
> +      constraint_edge_t ce;
> +      bitmap tmp;
> +
> +      /* We can't eliminate things whose address is taken, or which is
> +         the target of a dereference.  */
> +      if (vi->address_taken || vi->indirect_target)
> +        continue;
> +
> +      /* See if all predecessors of i are ripe for elimination */

Capitalize identifier references.

> +      /* Theorem 4 in Rountev and Chandra: If i is a direct node,
> +         then Solution(i) is a subset of Solution (w), where w is a
> +         predecessor in the graph.
> +         Corrolary: If all predecessors of i have the same
> +         points-to set, then i has that same points-to set as
> +         those predecessors.  */

Likewise.  Add some vertical spacing too.

> +  /* Process the complicated constraints */

s/complicated/complex/

> +  /* Globals are considered to be the equivalent of anything.  We can
> +     do significantly better than this without too much trouble, by
> +     saying a global points to anything, so that taking the address of
> +     a global is fine.  */
> +#if 0
> +  if (DECL_P (t)
> +      && is_global_var (t)
> +      && !TREE_READONLY (t))
> +    {
> +      cexpr.type = ADDRESSOF;
> +      cexpr.var = anything_id;
> +    }
> +  else
> +#endif

Why this #if 0?

> +  if (TREE_READONLY (t))
> +    {
> +      cexpr.type = ADDRESSOF;
> +      cexpr.var = readonly_id;
> +    }
> +  else
> +    cexpr.var = get_id_for_tree (t);
> +
> +  cexpr.offset = 0;
> +  return cexpr;
> +}
> +
> +/* Process a completed constraint T, and add it to the constraint
> +   list.
> +  */
> +
> +static void
> +process_constraint (constraint_t t)
> +{
> +  struct constraint_expr rhs = t->rhs;
> +  struct constraint_expr lhs = t->lhs;
> +
> +  gcc_assert (rhs.var < VEC_length (varinfo_t, varmap));
> +  gcc_assert (lhs.var < VEC_length (varinfo_t, varmap));
> +
> +  /* ANYTHING == ANYTHING is pointless.  */
> +  if (lhs.var == anything_id && rhs.var == anything_id)
> +    return;
> +
> +  /* If we have &ANYTHING = something, convert to SOMETHING = &ANYTHING) */
> +  else if (lhs.var == anything_id && lhs.type == ADDRESSOF)
> +    {
> +      rhs = t->lhs;
> +      t->lhs = t->rhs;
> +      t->rhs = rhs;
> +      process_constraint (t);
> +    }
> +  /* This can happen in our IR with things like n->a = *p */
> +  else if (rhs.type == DEREF && lhs.type == DEREF && rhs.var != anything_id)
> +    {
> +      /* Split into tmp = *rhs, *lhs = tmp */
> +      tree rhsdecl = get_varinfo (rhs.var)->decl;
> +      tree pointertype = TREE_TYPE (rhsdecl);
> +      tree pointedtotype = TREE_TYPE (pointertype);
> +      tree tmpvar = create_tmp_var_raw (pointedtotype, "doubledereftmp");
> +      struct constraint_expr tmplhs = get_constraint_exp_from_ssa_var (tmpvar);
> +
> +      /* If this is an aggregate of known size, we should have passed
> +         this off to do_structure_copy, and it should have broken it
> +         up.
> +  */
> +      gcc_assert (!AGGREGATE_TYPE_P (pointedtotype)
> +                  || get_varinfo (rhs.var)->is_unknown_size_var);
> +
> +      process_constraint (new_constraint (tmplhs, rhs));
> +      process_constraint (new_constraint (lhs, tmplhs));
> +    }
> +  else if (rhs.type == ADDRESSOF)
> +    {
> +      varinfo_t vi;
> +      gcc_assert (rhs.offset == 0);
> +
> +      for (vi = get_varinfo (rhs.var); vi != NULL; vi = vi->next)
> +        vi->address_taken = true;
> +
> +      VEC_safe_push (constraint_t, gc, constraints, t);
> +    }
> +  else
> +    {
> +      if (lhs.type != DEREF && rhs.type == DEREF)
> +        get_varinfo (lhs.var)->indirect_target = true;
> +      VEC_safe_push (constraint_t, gc, constraints, t);
> +    }
> +}
> +
> +
> +/* Return the position, in bits, of FIELD_DECL from the beginning of it's
> +   structure.  */
> +

s/it's/its/

> +     + tree_low_cst (DECL_FIELD_BIT_OFFSET (fdecl), 1);
> +}
> +
> +/* Given a component ref, return the constraint_expr for it.  */
> +

s/a component ref/a COMPONENT_REF T/

> +  if (result.type == SCALAR)
> +    {
> +      /* In Soviet Russia, you can access one past the end of an
> +         array.

Ahem.

> +  /* If this variable already has subvars, just create the variables for the
> +     subvars and we are done.
> +     NOTE: This assumes things haven't generated uses of previously
> +     unused structure fields, like, say, SRA :P*/

Spacing.  Why the smiley?

> +  /* If the variable doesn't have subvars, we may end up needing to
> +     create fake variables for the fields, and do a whole bunch of work */
> +

You may want to expand on what "whole bunch of work" means when we
can't do subvars.

> +/* Given a pointer variable P, fill in it's points-to set, or return
> +   false if we can't.
> +  */
> +

s/it's/its/

> +bool
> +find_what_p_points_to (tree p)
> +{
> +  unsigned int id = 0;
> +  if (!have_alias_info)
> +    return false;
> +  if (lookup_id_for_tree (p, &id))
> +    {
> +      varinfo_t vi = get_varinfo (id);
> +
> +      if (vi->is_artificial_var)
> +        return false;
> +
> +      /* See if this is a field or a structure */
> +      if (vi->size != vi->fullsize)
> +        {
> +          if (!var_can_have_subvars (vi->decl)
> +              || get_subvars_for_var (vi->decl) == NULL)
> +            return false;
> +        }
> +      else
> +        {
> +          struct ptr_info_def *pi = get_ptr_info (p);
> +          unsigned int i;
> +          bitmap_iterator bi;
> +          /* This variable may have been collapsed, let's get the real
> +             variable.  */
> +          vi = get_varinfo (vi->node);
> +
> +          /* Make sure there aren't any artificial vars in the points
> +             to set.  XXX: Note that we need to translate our heap
> +             variables to something.  */
> +          EXECUTE_IF_SET_IN_BITMAP (vi->solution, 0, i, bi)
> +            {
> +              if (get_varinfo (i)->is_artificial_var)
> +                return false;
> +            }
> +          pi->pt_anything = false;
> +          if (!pi->pt_vars)
> +            pi->pt_vars = BITMAP_GGC_ALLOC ();
> +          set_uids_in_ptset (pi->pt_vars, vi->solution);
> +          return true;
> +        }
> +    }
> +  return false;
> +}
> +
> +/* Create points-to sets for the current function.  */
> +
> +static void
> +create_alias_vars (void)
> +{
> +  basic_block bb;
> +  struct constraint_expr lhs, rhs;
> +
> +  bitmap_obstack_initialize (&ptabitmap_obstack);
> +  constraint_pool = create_alloc_pool ("Constraint pool",
> +                                       sizeof (struct constraint), 30);
> +  variable_info_pool = create_alloc_pool ("Variable info pool",
> +                                          sizeof (struct variable_info), 30);
> +  constraint_edge_pool = create_alloc_pool ("Constraint edges",
> +                                            sizeof (struct constraint_edge), 30);

Move all the init stuff into its own function.  Perhaps we can also
factor the code creating all the special variables (NULL, ANYTHING,
etc.)?
> +  if (dump_file)
> +    {
> +      unsigned int i;
> +      if (dump_flags & TDF_STATS)
> +        {
> +          fprintf (dump_file, "Stats:\n");
> +          fprintf (dump_file, "Total vars:%d\n", stats.total_vars);
> +          fprintf (dump_file, "Statically unified vars:%d\n", stats.unified_vars_static);
> +          fprintf (dump_file, "Collapsed vars:%d\n", stats.collapsed_vars);
> +          fprintf (dump_file, "Dynamically unified vars:%d\n", stats.unified_vars_dynamic);
> +          fprintf (dump_file, "Iterations:%d\n", stats.iterations);
> +        }
> +      for (i = 0; i < VEC_length (varinfo_t, varmap); i++)
> +        print_solution_for_var (dump_file, i);
> +

Make this dump_points_to/debug_points_to or something along those
lines.


Diego.

**Follow-Ups**:
- **Re: [PATCH] Structure aliasing analyzer**, *From:* Daniel Berlin
