[PATCH 3/7] IPA-CP escape and clobber analysis


Hi,

this patch is rather big but not overly complicated.  Its goal is to
figure out whether data passed to a function by reference escapes
(somewhere, not necessarily in that particular function) and is
potentially clobbered (in that one function or its callees).
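
To illustrate what the analysis tracks, consider the following toy
example (mine, not part of the patch or its testcases):

  extern void use (int);
  int *global;

  void only_reads (int *p)
  {
    use (*p);    /* *p is only read: p neither escapes nor is clobbered.  */
  }

  void writes (int *p)
  {
    *p = 0;      /* *p is written: p is clobbered but does not escape.  */
  }

  void leaks (int *p)
  {
    global = p;  /* The pointer itself is stored: p escapes, and so it
                    must conservatively be considered clobbered too.  */
  }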

The result is stored into the call graph node global structure, at
least for now, because it is supposed to live longer than the IPA-CP
optimization info and to be available for PTA later in the pipeline.
Before that, however, quite a lot of intermediate results are stored in
a number of places.  First of all, there is a vector describing all SSA
names and address-taken local aggregates, which is used to figure out
relations between them and to do the local escape and clobber analysis.
(I am aware that a local aggregate might incorrectly pass as
non-clobbered; that is fixed by the fifth patch.  This one is big
enough as it is, and the issue does not really matter here.)

We then store the local results into a new vector ref_descs; these
describe formal parameters as well as so-far-presumed-unescaped
aggregates and malloc'ed data that are passed as actual arguments to
other functions.  I did not store this into the existing descriptors
vector because ref_descs often has more elements than there are
parameters.  Also, I had to extend the UNKNOWN, KNOWN_TYPE and CONSTANT
jump functions with an index into this new vector (PASS_THROUGH and
ANCESTOR reuse the index into parameters), so there are quite a lot of
new getter and setter methods.
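
To give an idea of the shape of this data: each element of ref_descs
boils down to a few flags, roughly as in the following sketch (the
authoritative definition is the new ipa_ref_descriptor type in the
ipa-prop.h part of the patch; this is only my paraphrase of it):

  struct ipa_ref_descriptor
  {
    /* The address of the data may have escaped.  */
    unsigned escaped : 1;
    /* The data may be written to in this function or its callees.  */
    unsigned clobbered : 1;
    /* The data may be written to in a callee.  */
    unsigned callee_clobbered : 1;
  };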

This information is then used by a simple queue-based interprocedural
propagation.  Eventually, the information is stored into the call graph
node, as described above.  After propagation, the data in ref_descs and
in the call graph are the same, only the call graph can live much
longer.  The one set of flags that is not copied to call graph nodes is
the callee_clobbered flags, which only IPA-CP uses in a subsequent
patch (and which would require maintenance during inlining).
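
For instance, the upward direction of the propagation handles cases
like this one (again a toy example of mine):

  static int *stash;

  static void leaf (int *p)
  {
    stash = p;   /* p escapes in leaf.  */
  }

  static void mid (int *q)
  {
    leaf (q);    /* The escape of leaf's p propagates up, so q is
                    marked as escaped in mid as well...  */
  }

  void top (void)
  {
    int i = 0;
    mid (&i);    /* ...and consequently so is the address of i.  */
  }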

More uses of the flags are introduced by subsequent patches.  In this
one, the only use is that the IPA-CP modification phase is able to
consult the results instead of querying AA, and is capable of doing
more replacements of aggregate values when the aggregate is unescaped
and not clobbered.
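
A case along these lines (my sketch here, the actual test added by this
patch is gcc.dg/ipa/ipcp-agg-10.c) is:

  struct S { int a; };

  static int __attribute__ ((noinline))
  foo (struct S *s)
  {
    return s->a;   /* With the aggregate unescaped and not clobbered,
                      IPA-CP can now replace this load with 4.  */
  }

  int
  bar (void)
  {
    struct S s;
    s.a = 4;
    return foo (&s);
  }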

The following table summarizes what the pass can discover now.  All
compilations are with -Ofast -flto.  (I should have counted only
pointer-typed parameters, but that thought occurred to me too late; all
non-pointer ones are automatically considered clobbered.)  Please note
that in Fortran benchmarks, this information is often already available
through fnspec flags, but we can discover a few more cases (see the
last patch for some more information).

 |                    |        |          |       |           |       |    Callee |       |
 | Test               | Params | Noescape |     % | Noclobber |     % | noclobber |     % |
 |                    |        |          |       |           |       |           |       |
 |--------------------+--------+----------+-------+-----------+-------+-----------+-------+
 | FF libxul.so       | 462725 |    10422 |  2.25 |      4954 |  1.07 |      8872 |  1.92 |
 | Tramp 3D           |   6344 |     1019 | 16.06 |       985 | 15.53 |      1005 | 15.84 |
 |--------------------+--------+----------+-------+-----------+-------+-----------+-------+
 | perlbench          |   2550 |       87 |  3.41 |        10 |  0.39 |        61 |  2.39 |
 | bzip               |    194 |       28 | 14.43 |         1 |  0.52 |        13 |  6.70 |
 | gcc                |  10725 |      179 |  1.67 |        18 |  0.17 |       147 |  1.37 |
 | mcf                |     57 |        4 |  7.02 |         0 |  0.00 |         4 |  7.02 |
 | gobmk              |   8873 |      132 |  1.49 |         3 |  0.03 |        85 |  0.96 |
 | hmmer              |    643 |       71 | 11.04 |         8 |  1.24 |        64 |  9.95 |
 | sjeng              |    161 |        5 |  3.11 |         0 |  0.00 |         5 |  3.11 |
 | libquantum         |    187 |       48 | 25.67 |         6 |  3.21 |        14 |  7.49 |
 | h264ref            |   1092 |       48 |  4.40 |         4 |  0.37 |        47 |  4.30 |
 | astar              |    217 |       28 | 12.90 |         3 |  1.38 |        15 |  6.91 |
 | xalancbmk          |  28861 |      737 |  2.55 |       536 |  1.86 |       712 |  2.47 |
 |--------------------+--------+----------+-------+-----------+-------+-----------+-------+
 | bwaves             |     74 |       35 | 47.30 |        25 | 33.78 |        35 | 47.30 |
 | gamess             |  26059 |     3693 | 14.17 |      2796 | 10.73 |      3572 | 13.71 |
 | milc               |    429 |       22 |  5.13 |        11 |  2.56 |        22 |  5.13 |
 | zeusmp             |    284 |       31 | 10.92 |         2 |  0.70 |        31 | 10.92 |
 | gromacs            |   5514 |      230 |  4.17 |        54 |  0.98 |       202 |  3.66 |
 | cactusADM          |   2354 |       49 |  2.08 |        13 |  0.55 |        44 |  1.87 |
 | leslie3d           |     18 |        0 |  0.00 |         0 |  0.00 |         0 |  0.00 |
 | namd               |    163 |        0 |  0.00 |         0 |  0.00 |         0 |  0.00 |
 | soplex             |   2341 |       80 |  3.42 |        10 |  0.43 |        55 |  2.35 |
 | povray             |   4046 |      244 |  6.03 |        51 |  1.26 |       201 |  4.97 |
 | calculix           |   6260 |     1109 | 17.72 |       672 | 10.73 |       933 | 14.90 |
 | GemsFDTD           |    289 |       41 | 14.19 |        27 |  9.34 |        32 | 11.07 |
 | tonto              |   7255 |     1361 | 18.76 |      1178 | 16.24 |      1329 | 18.32 |
 | lbm                |     27 |        4 | 14.81 |         3 | 11.11 |         4 | 14.81 |
 | wrf                |  14212 |     4375 | 30.78 |      3358 | 23.63 |      4120 | 28.99 |
 | sphinx3            |    770 |       16 |  2.08 |         1 |  0.13 |        15 |  1.95 |
 |--------------------+--------+----------+-------+-----------+-------+-----------+-------+
 | ac.f90             |     21 |       14 | 66.67 |         7 | 33.33 |        14 | 66.67 |
 | aermod.f90         |    600 |      134 | 22.33 |        59 |  9.83 |       124 | 20.67 |
 | air.f90            |     85 |       41 | 48.24 |        14 | 16.47 |        41 | 48.24 |
 | capacita.f90       |     42 |       18 | 42.86 |        16 | 38.10 |        18 | 42.86 |
 | channel2.f90       |     12 |        4 | 33.33 |         4 | 33.33 |         4 | 33.33 |
 | doduc.f90          |    132 |       68 | 51.52 |        39 | 29.55 |        68 | 51.52 |
 | fatigue2.f90       |     65 |       43 | 66.15 |        20 | 30.77 |        43 | 66.15 |
 | gas_dyn2.f90       |     97 |       22 | 22.68 |         6 |  6.19 |        21 | 21.65 |
 | induct2.f90        |    121 |       41 | 33.88 |        24 | 19.83 |        41 | 33.88 |
 | linpk.f90          |     42 |       10 | 23.81 |         7 | 16.67 |        10 | 23.81 |
 | mdbx.f90           |     51 |       26 | 50.98 |         9 | 17.65 |        26 | 50.98 |
 | mp_prop_design.f90 |      2 |        0 |  0.00 |         0 |  0.00 |         0 |  0.00 |
 | nf.f90             |     41 |        8 | 19.51 |         8 | 19.51 |         8 | 19.51 |
 | protein.f90        |    116 |       40 | 34.48 |        25 | 21.55 |        35 | 30.17 |
 | rnflow.f90         |    212 |       54 | 25.47 |        37 | 17.45 |        51 | 24.06 |
 | test_fpu2.f90      |    160 |       22 | 13.75 |        14 |  8.75 |        18 | 11.25 |
 | tfft2.f90          |      7 |        3 | 42.86 |         0 |  0.00 |         3 | 42.86 |

I hope to improve the results further, for example by propagating the
malloc attribute to callers.
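
That is, in a wrapper like the following (xmalloc here is just a
hypothetical allocator declared with the malloc attribute), the result
of my_alloc is as fresh and unaliased as that of xmalloc, but without
propagating the attribute to the caller the analysis cannot know that:

  #include <stddef.h>

  extern void *xmalloc (size_t) __attribute__ ((malloc));

  static void *
  my_alloc (size_t n)
  {
    return xmalloc (n);  /* ECF_MALLOC is recognized here, but my_alloc
                            itself does not carry the attribute.  */
  }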

I have bootstrapped and tested this on x86_64; additionally, I checked
that it passes an LTO bootstrap and builds Firefox with LTO.  I assume
there will be many comments, but after I address them, I'd like to
commit this to trunk.

Thanks,

Martin


2014-04-30  Martin Jambor  <mjambor@suse.cz>

	* cgraph.h (cgraph_global_info): New fields noescape_parameters
	and noclobber_parameters.
	(cgraph_param_noescape_p): Declare.
	(cgraph_set_param_noescape): Likewise.
	(cgraph_param_noclobber_p): Likewise.
	(cgraph_set_param_noclobber): Likewise.
	* ipa-prop.h (ipa_unknown_data): New type.
	(ipa_known_type_data): New fields escape_ref_valid and
	escape_ref_index.
	(ipa_constant_data): Likewise.
	(jump_func_value): New field unknown.
	(ipa_get_jf_unknown_esc_ref_valid): New function.
	(ipa_get_jf_unknown_esc_ref_index): Likewise.
	(ipa_get_jf_known_type_esc_ref_valid): Likewise.
	(ipa_get_jf_known_type_esc_ref_index): Likewise.
	(ipa_get_jf_constant_esc_ref_valid): Likewise.
	(ipa_get_jf_constant_esc_ref_index): Likewise.
	(ipa_ref_descriptor): New type.
	(ipa_node_params): New fields ref_descs and node_up_enqueued.
	(ipa_is_ref_escaped): New function.
	(ipa_is_ref_clobbered): Likewise.
	(ipa_is_ref_callee_clobbered): Likewise.
	(ipa_is_param_ref_safely_constant): Likewise.
	(ipa_spread_escapes): Declare.
	* ipa-prop.c: Include stringpool.h, tree-ssanames.h and pointer-set.h.
	(ipa_escape): New type.
	(valid_escape_result_index): New function.
	(func_body_info): New fields func, escapes and decl_escapes.
	(ipa_print_node_jump_functions_for_edge): Dump new fields.
	(ipa_set_jf_unknown): New function.  Use it instead of directly
	setting a jump functions type elsewhere.
	(ipa_set_jf_unknown_copy): New function.
	(ipa_set_jf_unknown_ref_index): Likewise.
	(ipa_set_jf_known_type_copy): Likewise.
	(ipa_set_jf_known_type): Initialize new fields.
	(ipa_set_jf_known_type_ref_index): New function.
	(ipa_set_jf_constant): Initialize new fields.
	(ipa_set_jf_constant_ref_index): New function.
	(ipa_get_tracked_refs_count): Likewise.
	(ipa_set_ref_escaped): Likewise.
	(ipa_set_ref_clobbered): Likewise.
	(ipa_set_ref_callee_clobbered): Likewise.
	(ipa_load_from_parm_agg_1): Use const_ref parameter flag.
	(get_escape_for_ref): New function.
	(get_escape_for_value): Likewise.
	(ipa_compute_jump_functions_for_edge): Add reference info to jump
	functions.  Wrapped comments to 80 columns, added a checking assert
	that all jump functions start with no information.
	(visit_ref_for_mod_analysis): Renamed to visit_ref_mark_it_used.
	Simplified comment.
	(ipa_analyze_params_uses_in_bb): Renamed to ipa_analyze_bb_statements.
	Simplified comment.
	(analyze_phi_escapes): New function.
	(analyze_ssa_escape): Likewise.
	(analyze_all_ssa_escapes): Likewise.
	(create_escape_structures): Likewise.
	(free_escape_structures): Likewise.
	(pick_escapes_from_call): Likewise.
	(gather_picked_escapes): Likewise.
	(ipa_analyze_node): Initialize and deinitialize new fbi fields and
	escape structures, call create_escape_structures,
	analyze_all_ssa_escapes and pick_escapes_from_call, assign ref indices
	to formal parameters.
	(escape_spreading_data): New type.
	(enque_to_propagate_escapes_up): New function.
	(enque_to_propagate_escapes_down): Likewise.
	(escape_origin_from_jfunc): Likewise.
	(spread_escapes_up_from_one_alias): Likewise.
	(spread_escapes_up): Likewise.
	(spread_escapes_down): Likewise.
	(ipa_spread_escapes): Likewise.
	(make_unknown_jf_from_known_type_jf): Likewise.
	(combine_known_type_and_ancestor_jfs): Also update ref index fields.
	Switch arguments for consistency, changed the one caller.
	(update_jump_functions_after_inlining): Also update ref index fields,
	make use of unescaped info.
	(update_indirect_edges_after_inlining): Make use of unescaped info.
	(ipa_free_node_params_substructures): Free also ref_desc vector.
	(ipa_node_duplication_hook): Also copy reference descriptor vector and
	const_refs.
	(ipa_print_node_params): Also print reference flags.
	(ipa_write_jump_function): Stream new fields.
	(ipa_read_jump_function): Likewise.
	(ipa_write_node_info): Stream reference description.
	(ipa_read_node_info): Likewise, also clear new flag node_up_enqueued.
	(read_agg_replacement_chain): Whitespace fix.
	(adjust_agg_replacement_values): Also assign const_refs in descriptors
	from those in transformation data.
	(ipcp_transform_function): Initialize new fields of fbi.
	* ipa-cp.c (agg_pass_through_permissible_p): Make use of the new
	escape information.  Accept caller_info as a parameter, updated all
	callers.
	(propagate_aggs_accross_jump_function): Make use of the new escape
	information.
	(intersect_aggregates_with_edge): Bail out early if a pass_through
	jump function does not allow passing aggregates.  Make use of the new
	escape information.  Allow NULL values in aggregate jump functions.
	(ipcp_driver): Call ipa_spread_escapes.
	* ipa-inline.c (ipa_inline): Call ipa_spread_escapes if necessary.
	* cgraph.c (cgraph_param_noescape_p): New function.
	(cgraph_set_param_noescape): Likewise.
	(cgraph_param_noclobber_p): Likewise.
	(cgraph_set_param_noclobber): Likewise.
	* cgraphclones.c (duplicate_thunk_for_node): Assert that noclobber
	and noescape bitmaps are NULL.
	(copy_noescape_noclobber_bitmaps): New function.
	(cgraph_clone_node): Copy noescape and noclobber bitmaps.
	(cgraph_copy_node_for_versioning): Likewise.
	* lto-cgraph.c (output_param_bitmap): New function.
	(output_node_opt_summary): Use it to stream args_to_skip,
	combined_args_to_skip, noescape_parameters and noclobber_parameters
	bitmaps.
	(input_param_bitmap): New function.
	(input_node_opt_summary): Use it to stream args_to_skip,
	combined_args_to_skip, noescape_parameters and noclobber_parameters
	bitmaps.
	* tree-inline.c (update_noescape_noclobber_bitmaps): New function.
	(tree_function_versioning): Call it.

testsuite/
	* gcc.dg/ipa/ipcp-agg-10.c: New test.

Index: src/gcc/ipa-prop.c
===================================================================
--- src.orig/gcc/ipa-prop.c
+++ src/gcc/ipa-prop.c
@@ -43,6 +43,8 @@ along with GCC; see the file COPYING3.
 #include "gimple-ssa.h"
 #include "tree-cfg.h"
 #include "tree-phinodes.h"
+#include "stringpool.h"
+#include "tree-ssanames.h"
 #include "ssa-iterators.h"
 #include "tree-into-ssa.h"
 #include "tree-dfa.h"
@@ -60,6 +62,7 @@ along with GCC; see the file COPYING3.
 #include "stringpool.h"
 #include "tree-ssanames.h"
 #include "domwalk.h"
+#include "pointer-set.h"
 
 /* Intermediate information that we get from alias analysis about a particular
    parameter in a particular basic_block.  When a parameter or the memory it
@@ -91,11 +94,64 @@ struct ipa_bb_info
   vec<param_aa_status> param_aa_statuses;
 };
 
+/* Structure used for intra-procedural escape analysis (and associated
+   memory-write detection).  When analyzing function body, we have one for each
+   SSA name and for all address-taken local declarations.  */
+
+struct ipa_escape
+{
+  /* If target is non-NULL, this is the offset relative to the reference
+     described by target.  */
+  HOST_WIDE_INT offset;
+
+  /* If this describes (a part of) data described by other ipa_escape
+     structure, target is non-NULL.  In that case, that structure should be
+     used instead of this one and unless explicitely noted, other fields are
+     meaningless.  */
+  struct ipa_escape *target;
+
+  /* The last seen edge that had a reference to this data among its parameters.
+     Used to make sure we do not pass the same data in two different
+     arguments.  */
+  struct cgraph_edge *last_seen_cs;
+
+  /* Index of the result slot where the escape flags are going to end up,
+     plus one.  Zero means this structure will remain unused.  */
+  int result_index;
+
+  /* True if we have already dealt with this SSA name.  Valid even if target is
+     non-NULL.  */
+  bool analyzed;
+
+  /* Could the address of the data have escaped?  */
+  bool escaped;
+
+  /* Flag set when an SSA name has been used as a base for a memory write.
+     Only valid when the SSA name is not considered escaped, otherwise it might
+     be incorrectly clear.  */
+  bool write_base;
+};
+
+/* If ESC has a valid (i.e. non-zero) result_index, return true and store the
+   directly usable (i.e. decremented) index to *INDEX.  */
+
+static inline bool
+valid_escape_result_index (struct ipa_escape *esc, int *index)
+{
+  if (esc->result_index == 0)
+    return false;
+  *index = esc->result_index - 1;
+  return true;
+}
+
 /* Structure with global information that is only used when looking at function
    body. */
 
 struct func_body_info
 {
+  /* Struct function of the function that is being analyzed.  */
+  struct function *func;
+
   /* The node that is being analyzed.  */
   cgraph_node *node;
 
@@ -105,6 +161,13 @@ struct func_body_info
   /* Information about individual BBs. */
   vec<ipa_bb_info> bb_infos;
 
+  /* Escape analysis information for SSA names and local addressable
+     declarations.  */
+  vec<ipa_escape> escapes;
+
+  /* Mapping from VAR_DECLS to escape information.  */
+  pointer_map <ipa_escape *> *decl_escapes;
+
   /* Number of parameters.  */
   int param_count;
 
@@ -282,7 +345,14 @@ ipa_print_node_jump_functions_for_edge (
 
       fprintf (f, "       param %d: ", i);
       if (type == IPA_JF_UNKNOWN)
-	fprintf (f, "UNKNOWN\n");
+	{
+	  fprintf (f, "UNKNOWN");
+	  if (ipa_get_jf_unknown_esc_ref_valid (jump_func))
+	    fprintf (f, ", escape ref: %i\n",
+		     ipa_get_jf_unknown_esc_ref_index (jump_func));
+	  else
+	    fprintf (f, "\n");
+	}
       else if (type == IPA_JF_KNOWN_TYPE)
 	{
 	  fprintf (f, "KNOWN TYPE: base  ");
@@ -290,6 +360,9 @@ ipa_print_node_jump_functions_for_edge (
 	  fprintf (f, ", offset "HOST_WIDE_INT_PRINT_DEC", component ",
 		   jump_func->value.known_type.offset);
 	  print_generic_expr (f, jump_func->value.known_type.component_type, 0);
+	  if (ipa_get_jf_known_type_esc_ref_valid (jump_func))
+	    fprintf (f, ", escape ref: %i",
+		     ipa_get_jf_known_type_esc_ref_index (jump_func));
 	  fprintf (f, "\n");
 	}
       else if (type == IPA_JF_CONST)
@@ -304,6 +377,9 @@ ipa_print_node_jump_functions_for_edge (
 	      print_generic_expr (f, DECL_INITIAL (TREE_OPERAND (val, 0)),
 				  0);
 	    }
+	  if (ipa_get_jf_constant_esc_ref_valid (jump_func))
+	    fprintf (f, ", escape ref: %i",
+		     ipa_get_jf_constant_esc_ref_index (jump_func));
 	  fprintf (f, "\n");
 	}
       else if (type == IPA_JF_PASS_THROUGH)
@@ -430,6 +506,39 @@ ipa_print_all_jump_functions (FILE *f)
     }
 }
 
+/* Set JFUNC to be an unknown jump function with invalid reference index.  */
+
+static void
+ipa_set_jf_unknown (struct ipa_jump_func *jfunc)
+{
+  jfunc->type = IPA_JF_UNKNOWN;
+  jfunc->value.unknown.escape_ref_valid = false;
+}
+
+/* Set DST to be a copy of another unknown jump function SRC.  */
+
+static void
+ipa_set_jf_unknown_copy (struct ipa_jump_func *dst,
+			 struct ipa_jump_func *src)
+
+{
+  gcc_checking_assert (src->type == IPA_JF_UNKNOWN);
+  dst->type = IPA_JF_UNKNOWN;
+  dst->value.unknown = src->value.unknown;
+}
+
+/* Set reference description of unknown JFUNC to be valid and referring to
+   INDEX.  */
+
+static void
+ipa_set_jf_unknown_ref_index (struct ipa_jump_func *jfunc, int index)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_UNKNOWN);
+  gcc_checking_assert (index >= 0);
+  jfunc->value.unknown.escape_ref_valid = true;
+  jfunc->value.unknown.escape_ref_index = index;
+}
+
 /* Set JFUNC to be a known type jump function.  */
 
 static void
@@ -445,11 +554,37 @@ ipa_set_jf_known_type (struct ipa_jump_f
   jfunc->value.known_type.offset = offset,
   jfunc->value.known_type.base_type = base_type;
   jfunc->value.known_type.component_type = component_type;
+  jfunc->value.known_type.escape_ref_valid = false;
+  jfunc->value.known_type.escape_ref_index = 0;
   gcc_assert (component_type);
 }
 
-/* Set JFUNC to be a copy of another jmp (to be used by jump function
-   combination code).  The two functions will share their rdesc.  */
+/* Set reference description of known_type JFUNC to be valid and referring to
+   INDEX.  */
+
+static void
+ipa_set_jf_known_type_ref_index (struct ipa_jump_func *jfunc, int index)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_KNOWN_TYPE);
+  gcc_checking_assert (index >= 0);
+  jfunc->value.known_type.escape_ref_valid = true;
+  jfunc->value.known_type.escape_ref_index = index;
+}
+
+/* Set DST to be a copy of another known type jump function SRC.  */
+
+static void
+ipa_set_jf_known_type_copy (struct ipa_jump_func *dst,
+			    struct ipa_jump_func *src)
+
+{
+  gcc_checking_assert (src->type == IPA_JF_KNOWN_TYPE);
+  dst->type = IPA_JF_KNOWN_TYPE;
+  dst->value.known_type = src->value.known_type;
+}
+
+/* Set DST to be a copy of another constant jump function SRC.  The two
+   functions will share their rdesc.  */
 
 static void
 ipa_set_jf_cst_copy (struct ipa_jump_func *dst,
@@ -472,6 +607,8 @@ ipa_set_jf_constant (struct ipa_jump_fun
     SET_EXPR_LOCATION (constant, UNKNOWN_LOCATION);
   jfunc->type = IPA_JF_CONST;
   jfunc->value.constant.value = unshare_expr_without_location (constant);
+  jfunc->value.constant.escape_ref_valid = false;
+  jfunc->value.constant.escape_ref_index = 0;
 
   if (TREE_CODE (constant) == ADDR_EXPR
       && TREE_CODE (TREE_OPERAND (constant, 0)) == FUNCTION_DECL)
@@ -491,6 +628,19 @@ ipa_set_jf_constant (struct ipa_jump_fun
     jfunc->value.constant.rdesc = NULL;
 }
 
+/* Set reference description of constant JFUNC to be valid and referring to
+   INDEX.  */
+
+static void
+ipa_set_jf_constant_ref_index (struct ipa_jump_func *jfunc, int index)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_CONST);
+  gcc_checking_assert (index >= 0);
+  jfunc->value.constant.escape_ref_valid = true;
+  jfunc->value.constant.escape_ref_index = index;
+}
+
+
 /* Set JFUNC to be a simple pass-through jump function.  */
 static void
 ipa_set_jf_simple_pass_through (struct ipa_jump_func *jfunc, int formal_id,
@@ -539,6 +689,41 @@ ipa_set_ancestor_jf (struct ipa_jump_fun
   jfunc->value.ancestor.type_preserved = type_preserved;
 }
 
+/* Return the number of references tracked for escape analysis in INFO.  */
+
+static inline int
+ipa_get_tracked_refs_count (struct ipa_node_params *info)
+{
+  return info->ref_descs.length ();
+}
+
+/* Set the escape flag of reference number I of the function corresponding to
+   INFO to VAL.  */
+
+static inline void
+ipa_set_ref_escaped (struct ipa_node_params *info, int i, bool val)
+{
+  info->ref_descs[i].escaped = val;
+}
+
+/* Set the clobbered flag corresponding to the Ith tracked reference of the
+   function associated with INFO to VAL.  */
+
+static inline void
+ipa_set_ref_clobbered (struct ipa_node_params *info, int i, bool val)
+{
+  info->ref_descs[i].clobbered = val;
+}
+
+/* Set the callee_clobbered flag corresponding to the Ith tracked reference of
+   the function associated with INFO to VAL.  */
+
+static inline void
+ipa_set_ref_callee_clobbered (struct ipa_node_params *info, int i, bool val)
+{
+  info->ref_descs[i].callee_clobbered = val;
+}
+
 /* Extract the acual BINFO being described by JFUNC which must be a known type
    jump function.  */
 
@@ -784,7 +969,7 @@ detect_type_change (tree arg, tree base,
   if (!tci.known_current_type
       || tci.multiple_types_encountered
       || offset != 0)
-    jfunc->type = IPA_JF_UNKNOWN;
+    ipa_set_jf_unknown (jfunc);
   else
     ipa_set_jf_known_type (jfunc, 0, tci.known_current_type, comp_type);
 
@@ -1090,7 +1275,8 @@ ipa_load_from_parm_agg_1 (struct func_bo
     }
 
   if (index >= 0
-      && parm_ref_data_preserved_p (fbi, index, stmt, op))
+      && ((fbi && cgraph_param_noclobber_p (fbi->node, index))
+	  || parm_ref_data_preserved_p (fbi, index, stmt, op)))
     {
       *index_p = index;
       *by_ref_p = true;
@@ -1725,6 +1911,86 @@ ipa_get_callee_param_type (struct cgraph
   return NULL;
 }
 
+static void
+analyze_ssa_escape (struct func_body_info *fbi, tree ssa,
+		    struct ipa_escape *esc);
+
+/* Return the ipa_escape structure suitable for REFERENCE, if it is a
+   declaration or a MEM_REF.  Return NULL if there is no structure describing
+   REFERENCE.  If a non-NULL result is returned, put the offset of the
+   REFERENCE relative to the start of data described by the result into
+   *OFFSET, and size and max_size as returned by get_ref_base_and_extent to
+   *SIZE and *MAX_SIZE respectively.  */
+
+static struct ipa_escape *
+get_escape_for_ref (struct func_body_info *fbi, tree reference,
+		    HOST_WIDE_INT *offset, HOST_WIDE_INT *size,
+		    HOST_WIDE_INT *max_size)
+{
+  struct ipa_escape *res;
+  tree base = get_ref_base_and_extent (reference, offset, size, max_size);
+
+  if (DECL_P (base))
+    {
+      ipa_escape **d_esc = fbi->decl_escapes->contains (base);
+      if (!d_esc)
+	return NULL;
+      res = *d_esc;
+    }
+  else if (TREE_CODE (base) == MEM_REF
+	   && TREE_CODE (TREE_OPERAND (base, 0)) == SSA_NAME)
+    {
+      tree ssa = TREE_OPERAND (base, 0);
+      res = &fbi->escapes[SSA_NAME_VERSION (ssa)];
+      if (!res->analyzed)
+	analyze_ssa_escape (fbi, ssa, res);
+    }
+  else
+    return NULL;
+
+  if (res->target)
+    {
+      *offset += res->offset;
+      res = res->target;
+    }
+  return res;
+}
+
+/* Return the ipa_escape structure suitable for T, if it is an ssa_name or an
+   ADDR_EXPR.  Return NULL if there is no structure for T.  If a non-NULL
+   result is returned, put the offset of the value T relative to the start of
+   data described by the result into *OFFSET.  */
+
+static struct ipa_escape *
+get_escape_for_value (struct func_body_info *fbi, tree t,
+		      HOST_WIDE_INT *offset)
+{
+  if (TREE_CODE (t) == SSA_NAME)
+    {
+      struct ipa_escape *res;
+      *offset = 0;
+      res = &fbi->escapes[SSA_NAME_VERSION (t)];
+      if (!res->analyzed)
+	analyze_ssa_escape (fbi, t, res);
+
+      if (res->target)
+	{
+	  *offset += res->offset;
+	  res = res->target;
+	}
+
+      return res;
+    }
+  else if (TREE_CODE (t) == ADDR_EXPR)
+    {
+      HOST_WIDE_INT dummy_size, dummy_max_size;
+      return get_escape_for_ref (fbi, TREE_OPERAND (t, 0), offset, &dummy_size,
+				 &dummy_max_size);
+    }
+  else
+    return NULL;
+}
+
 /* Compute jump function for all arguments of callsite CS and insert the
    information in the jump_functions array in the ipa_edge_args corresponding
    to this callsite.  */
@@ -1753,6 +2019,8 @@ ipa_compute_jump_functions_for_edge (str
       tree arg = gimple_call_arg (call, n);
       tree param_type = ipa_get_callee_param_type (cs, n);
 
+      gcc_checking_assert (jfunc->type == IPA_JF_UNKNOWN
+			   && !ipa_get_jf_unknown_esc_ref_valid (jfunc));
       if (is_gimple_ip_invariant (arg))
 	ipa_set_jf_constant (jfunc, arg, cs);
       else if (!is_gimple_reg_type (TREE_TYPE (arg))
@@ -1807,19 +2075,42 @@ ipa_compute_jump_functions_for_edge (str
 				      ? TREE_TYPE (param_type)
 				      : NULL);
 
-      /* If ARG is pointer, we can not use its type to determine the type of aggregate
-	 passed (because type conversions are ignored in gimple).  Usually we can
-	 safely get type from function declaration, but in case of K&R prototypes or
-	 variadic functions we can try our luck with type of the pointer passed.
-	 TODO: Since we look for actual initialization of the memory object, we may better
-	 work out the type based on the memory stores we find.  */
+      /* If ARG is pointer, we can not use its type to determine the type of
+	 aggregate passed (because type conversions are ignored in gimple).
+	 Usually we can safely get type from function declaration, but in case
+	 of K&R prototypes or variadic functions we can try our luck with type
+	 of the pointer passed.
+	 TODO: Since we look for actual initialization of the memory object, we
+	 may better work out the type based on the memory stores we find.  */
       if (!param_type)
 	param_type = TREE_TYPE (arg);
 
-      if ((jfunc->type != IPA_JF_PASS_THROUGH
-	      || !ipa_get_jf_pass_through_agg_preserved (jfunc))
-	  && (jfunc->type != IPA_JF_ANCESTOR
-	      || !ipa_get_jf_ancestor_agg_preserved (jfunc))
+      HOST_WIDE_INT dummy_offset;
+      struct ipa_escape *esc = get_escape_for_value (fbi, arg, &dummy_offset);
+      int ref_index;
+      if (esc && valid_escape_result_index (esc, &ref_index))
+	{
+	  if (jfunc->type == IPA_JF_UNKNOWN)
+	    ipa_set_jf_unknown_ref_index (jfunc, ref_index);
+	  else if (jfunc->type == IPA_JF_KNOWN_TYPE)
+	    ipa_set_jf_known_type_ref_index (jfunc, ref_index);
+	  else if (jfunc->type == IPA_JF_CONST)
+	    ipa_set_jf_constant_ref_index (jfunc, ref_index);
+	  else
+	    {
+	      gcc_checking_assert
+		(jfunc->type != IPA_JF_PASS_THROUGH
+		 || ipa_get_jf_pass_through_formal_id (jfunc) == ref_index);
+	      gcc_checking_assert
+		(jfunc->type != IPA_JF_ANCESTOR
+		 || ipa_get_jf_ancestor_formal_id (jfunc) == ref_index);
+	    }
+	}
+
+      /* TODO: We should allow aggregate jump functions even for these types of
+	 jump functions but we need to be able to combine them first.  */
+      if (jfunc->type != IPA_JF_PASS_THROUGH
+	  && jfunc->type != IPA_JF_ANCESTOR
 	  && (AGGREGATE_TYPE_P (TREE_TYPE (arg))
 	      || POINTER_TYPE_P (param_type)))
 	determine_known_aggregate_parts (call, arg, param_type, jfunc);
@@ -2223,12 +2514,11 @@ ipa_analyze_stmt_uses (struct func_body_
     ipa_analyze_call_uses (fbi, stmt);
 }
 
-/* Callback of walk_stmt_load_store_addr_ops for the visit_load.
-   If OP is a parameter declaration, mark it as used in the info structure
-   passed in DATA.  */
+/* Callback of walk_stmt_load_store_addr_ops.  If OP is a parameter
+   declaration, mark it as used in the info structure passed in DATA.  */
 
 static bool
-visit_ref_for_mod_analysis (gimple, tree op, tree, void *data)
+visit_ref_mark_it_used (gimple, tree op, tree, void *data)
 {
   struct ipa_node_params *info = (struct ipa_node_params *) data;
 
@@ -2244,13 +2534,12 @@ visit_ref_for_mod_analysis (gimple, tree
   return false;
 }
 
-/* Scan the statements in BB and inspect the uses of formal parameters.  Store
-   the findings in various structures of the associated ipa_node_params
-   structure, such as parameter flags, notes etc.  FBI holds various data about
-   the function being analyzed.  */
+/* Scan the statements in BB, inspect the uses of formal parameters, do escape
+   analysis and so on.  FBI holds various data about the function being
+   analyzed.  */
 
 static void
-ipa_analyze_params_uses_in_bb (struct func_body_info *fbi, basic_block bb)
+ipa_analyze_bb_statements (struct func_body_info *fbi, basic_block bb)
 {
   gimple_stmt_iterator gsi;
   for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -2262,15 +2551,15 @@ ipa_analyze_params_uses_in_bb (struct fu
 
       ipa_analyze_stmt_uses (fbi, stmt);
       walk_stmt_load_store_addr_ops (stmt, fbi->info,
-				     visit_ref_for_mod_analysis,
-				     visit_ref_for_mod_analysis,
-				     visit_ref_for_mod_analysis);
+				     visit_ref_mark_it_used,
+				     visit_ref_mark_it_used,
+				     visit_ref_mark_it_used);
     }
   for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     walk_stmt_load_store_addr_ops (gsi_stmt (gsi), fbi->info,
-				   visit_ref_for_mod_analysis,
-				   visit_ref_for_mod_analysis,
-				   visit_ref_for_mod_analysis);
+				   visit_ref_mark_it_used,
+				   visit_ref_mark_it_used,
+				   visit_ref_mark_it_used);
 }
 
 /* Calculate controlled uses of parameters of NODE.  */
@@ -2344,10 +2633,284 @@ private:
 void
 analysis_dom_walker::before_dom_children (basic_block bb)
 {
-  ipa_analyze_params_uses_in_bb (m_fbi, bb);
+  ipa_analyze_bb_statements (m_fbi, bb);
   ipa_compute_jump_functions_for_bb (m_fbi, bb);
 }
 
+/* Look at operands of PHI and if any of them is an address of a declaration,
+   mark that declaration escaped.  */
+
+void
+analyze_phi_escapes (gimple phi, struct func_body_info *fbi)
+{
+  for (unsigned i = 0; i < gimple_phi_num_args (phi); ++i)
+    {
+      tree op = gimple_phi_arg_def (phi, i);
+      if (TREE_CODE (op) != ADDR_EXPR)
+	continue;
+
+      tree base = get_base_address (TREE_OPERAND (op, 0));
+      if (!DECL_P (base))
+	continue;
+
+      ipa_escape **d_esc = fbi->decl_escapes->contains (base);
+      if (!d_esc)
+	continue;
+      (*d_esc)->escaped = true;
+    }
+}
+
+/* Check definition and uses of SSA and update ESC (and potentially escape
+   structures associated with other SSA names) accordingly.  */
+
+static void
+analyze_ssa_escape (struct func_body_info *fbi, tree ssa,
+		    struct ipa_escape *esc)
+{
+  esc->analyzed = true;
+  if (!POINTER_TYPE_P (TREE_TYPE (ssa)))
+    {
+      esc->escaped = true;
+      return;
+    }
+
+  /* First we need to check the definition and figure out whether we can work
+     with it or whether this name actually refers to data described by another
+     structure.  */
+  if (!SSA_NAME_IS_DEFAULT_DEF (ssa))
+    {
+      gimple def = SSA_NAME_DEF_STMT (ssa);
+
+      if (gimple_assign_single_p (def))
+	{
+	  tree rhs = gimple_assign_rhs1 (def);
+	  HOST_WIDE_INT offset;
+	  struct ipa_escape *r_esc = get_escape_for_value (fbi, rhs, &offset);
+	  if (r_esc)
+	    {
+	      esc->offset = offset;
+	      esc->target = r_esc;
+	    }
+	  else
+	    {
+	      esc->escaped = true;
+	      return;
+	    }
+	}
+      else if (is_gimple_call (def))
+	{
+	  /* TODO: If only C++ new had malloc attribute.  */
+	  int flags = gimple_call_flags (def);
+	  if ((flags & ECF_MALLOC) == 0)
+	    {
+	      esc->escaped = true;
+	      return;
+	    }
+	}
+      else
+	{
+	  if (gimple_code (def) == GIMPLE_PHI)
+	    /* Any SSA defined by a PHI is doomed, but it is a convenient place
+	       to check every pointer PHI.  */
+	    analyze_phi_escapes (def, fbi);
+
+	  esc->escaped = true;
+	  return;
+	}
+    }
+
+  if (esc->target)
+    esc = esc->target;
+  if (esc->escaped)
+    return;
+
+  /* If the definition is fine, we need to check the uses.  */
+
+  imm_use_iterator imm_iter;
+  use_operand_p use;
+  FOR_EACH_IMM_USE_FAST (use, imm_iter, ssa)
+    {
+      gimple stmt = USE_STMT (use);
+      if (is_gimple_debug (stmt))
+	continue;
+
+      switch (gimple_code (stmt))
+	{
+	case GIMPLE_ASSIGN:
+	  {
+	    if (!gimple_assign_single_p (stmt))
+	      {
+		esc->escaped = true;
+		return;
+	      }
+
+	      tree lhs = gimple_assign_lhs (stmt);
+	      /* Statements assigning to another SSA are OK, we check all of
+		 them.  */
+	      if (TREE_CODE (lhs) != SSA_NAME
+		  /* If LHS is not an SSA_NAME, RHS cannot be an ADDR_EXPR, and
+		     must be either a naked SSA_NAME or a load or an invariant.
+		     We only care if it is the SSA name we are after.  It can
+		     be a different SSA name if the use was on the LHS in a
+		     MEM_REF.  */
+		  && gimple_assign_rhs1 (stmt) == ssa)
+		{
+		  esc->escaped = true;
+		  return;
+		}
+
+	      while (handled_component_p (lhs))
+		lhs = TREE_OPERAND (lhs, 0);
+	      if (TREE_CODE (lhs) == MEM_REF
+		  && TREE_OPERAND (lhs, 0) == ssa)
+		esc->write_base = true;
+	    }
+	  break;
+
+	case GIMPLE_CALL:
+	  /* Calls will be dealt with when constructing jump functions.
+	     However, indirect calls mean that all values escape (we do IPA
+	     escape propagation before any devirtualization) and when not in
+	     LTO, even calls to functions in other compilation units are dark
+	     holes.  On the other hand, builtin free is whitelisted.  */
+	  if (!gimple_call_builtin_p (stmt, BUILT_IN_FREE))
+	    {
+	      struct cgraph_edge *cs = cgraph_edge (fbi->node, stmt);
+	      if (!cs || !cs->callee || (!cs->callee->definition && !flag_lto))
+		{
+		  esc->escaped = true;
+		  return;
+		}
+	    }
+	  break;
+
+	case GIMPLE_SWITCH:
+	case GIMPLE_COND:
+	  /* These are harmless.  */
+	  break;
+
+	default:
+	  esc->escaped = true;
+	  return;
+	}
+    }
+}
+
+/* Examine escapes of all SSA names.  */
+
+static void
+analyze_all_ssa_escapes (struct func_body_info *fbi)
+{
+  for (unsigned i = 1; i < fbi->func->gimple_df->ssa_names->length (); ++i)
+    {
+      tree ssa = ssa_name (i);
+      if (!ssa)
+	continue;
+      struct ipa_escape *esc = &fbi->escapes[SSA_NAME_VERSION (ssa)];
+      if (esc->analyzed)
+	continue;
+      analyze_ssa_escape (fbi, ssa, esc);
+    }
+}
+
+/* Initialize escape analysis structures in FBI.  */
+
+static void
+create_escape_structures (struct func_body_info *fbi)
+{
+  tree var, parm;
+  unsigned int i, var_idx, var_count = 0;
+
+  for (parm = DECL_ARGUMENTS (fbi->node->decl);
+       parm;
+       parm = DECL_CHAIN (parm))
+    if (TREE_ADDRESSABLE (parm))
+      var_count++;
+
+  FOR_EACH_LOCAL_DECL (fbi->func, i, var)
+    if (TREE_CODE (var) == VAR_DECL && TREE_ADDRESSABLE (var))
+      var_count++;
+
+  fbi->escapes = vNULL;
+  fbi->escapes.safe_grow_cleared (SSANAMES (fbi->func)->length () + var_count);
+  fbi->decl_escapes = new pointer_map <ipa_escape *>;
+
+  var_idx = SSANAMES (fbi->func)->length ();
+  for (parm = DECL_ARGUMENTS (fbi->node->decl);
+       parm;
+       parm = DECL_CHAIN (parm))
+    if (TREE_ADDRESSABLE (parm))
+      *fbi->decl_escapes->insert (parm) = &fbi->escapes[var_idx++];
+
+  FOR_EACH_LOCAL_DECL (fbi->func, i, var)
+    if (TREE_CODE (var) == VAR_DECL && TREE_ADDRESSABLE (var))
+      *fbi->decl_escapes->insert (var) = &fbi->escapes[var_idx++];
+}
+
+/* Free escape analysis structures in the FBI.  */
+
+static void
+free_escape_structures (struct func_body_info *fbi)
+{
+  fbi->escapes.release ();
+  delete fbi->decl_escapes;
+}
+
+/* Go over call arguments of CS and if any warrants a result_index for an
+   escape structure, assign *RI to it and increment *RI.  */
+
+void
+pick_escapes_from_call (struct func_body_info *fbi, struct cgraph_edge *cs,
+			int *ri)
+{
+  int arg_num = gimple_call_num_args (cs->call_stmt);
+
+  for (int i = 0; i < arg_num; ++i)
+    {
+      HOST_WIDE_INT offset;
+      tree arg = gimple_call_arg (cs->call_stmt, i);
+      struct ipa_escape *esc = get_escape_for_value (fbi, arg, &offset);
+
+      if (!esc || esc->escaped)
+	continue;
+
+      if (esc->last_seen_cs == cs)
+	{
+	  esc->escaped = true;
+	  continue;
+	}
+      esc->last_seen_cs = cs;
+
+      if (!esc->result_index)
+	{
+	  *ri = *ri + 1;
+	  esc->result_index = *ri;
+	}
+    }
+}
+
+/* Copy result escape flags to node info.  There must be exactly COUNT result
+   escapes.  */
+
+void
+gather_picked_escapes (struct func_body_info *fbi, int count)
+{
+  if (count == 0)
+    return;
+  fbi->info->ref_descs.safe_grow_cleared (count);
+
+  for (unsigned i = 0; i < fbi->escapes.length (); ++i)
+    {
+      struct ipa_escape *esc = &fbi->escapes[i];
+      int idx;
+      if (valid_escape_result_index (esc, &idx))
+	{
+	  ipa_set_ref_escaped (fbi->info, idx, esc->escaped);
+	  ipa_set_ref_clobbered (fbi->info, idx, esc->write_base);
+	}
+    }
+}
+
 /* Initialize the array describing properties of of formal parameters
    of NODE, analyze their uses and compute jump functions associated
    with actual arguments of calls from within NODE.  */
@@ -2381,28 +2944,48 @@ ipa_analyze_node (struct cgraph_node *no
   calculate_dominance_info (CDI_DOMINATORS);
   ipa_initialize_node_params (node);
   ipa_analyze_controlled_uses (node);
+  info->ref_descs = vNULL;
 
+  fbi.func = func;
   fbi.node = node;
   fbi.info = IPA_NODE_REF (node);
   fbi.bb_infos = vNULL;
   fbi.bb_infos.safe_grow_cleared (last_basic_block_for_fn (cfun));
   fbi.param_count = ipa_get_param_count (info);
   fbi.aa_walked = 0;
+  create_escape_structures (&fbi);
+  analyze_all_ssa_escapes (&fbi);
 
+  for (int i = 0; i < fbi.param_count; ++i)
+    {
+      tree ddef, parm = fbi.info->descriptors[i].decl;
+      if (is_gimple_reg (parm)
+	  && (ddef = ssa_default_def (cfun, parm)))
+	{
+	  struct ipa_escape *esc = &fbi.escapes[SSA_NAME_VERSION (ddef)];
+	  esc->result_index = i + 1;
+	}
+    }
+
+  int ri = fbi.param_count;
   for (struct cgraph_edge *cs = node->callees; cs; cs = cs->next_callee)
     {
       ipa_bb_info *bi = ipa_get_bb_info (&fbi, gimple_bb (cs->call_stmt));
       bi->cg_edges.safe_push (cs);
+      pick_escapes_from_call (&fbi, cs, &ri);
     }
 
   for (struct cgraph_edge *cs = node->indirect_calls; cs; cs = cs->next_callee)
     {
       ipa_bb_info *bi = ipa_get_bb_info (&fbi, gimple_bb (cs->call_stmt));
       bi->cg_edges.safe_push (cs);
+      pick_escapes_from_call (&fbi, cs, &ri);
     }
 
+  gather_picked_escapes (&fbi, ri);
   analysis_dom_walker (&fbi).walk (ENTRY_BLOCK_PTR_FOR_FN (cfun));
 
+  free_escape_structures (&fbi);
   int i;
   struct ipa_bb_info *bi;
   FOR_EACH_VEC_ELT (fbi.bb_infos, i, bi)
@@ -2412,6 +2995,271 @@ ipa_analyze_node (struct cgraph_node *no
   pop_cfun ();
 }
 
+/* Data about the current status of escape propagation. */
+
+struct escape_spreading_data
+{
+  /* To-do lists for escape spreading.  */
+  vec<cgraph_node *> up_stack;
+  vec<cgraph_node *> down_stack;
+
+  /* The current info corresponding to the node from which we are spreading
+     escaped flags.  */
+  struct ipa_node_params *info;
+};
+
+/* Put the NODE into the upward propagation work list in ESD, unless it is
+   already there.  */
+
+static void
+enque_to_propagate_escapes_up (struct escape_spreading_data *esd,
+			       struct cgraph_node *node)
+{
+  struct ipa_node_params *info = IPA_NODE_REF (node);
+  if (info->node_up_enqueued)
+    return;
+  info->node_up_enqueued = true;
+  esd->up_stack.safe_push (node);
+}
+
+/* Put the NODE into the downward propagation work list in ESD, unless it is
+   already there.  */
+
+static void
+enque_to_propagate_escapes_down (struct escape_spreading_data *esd,
+				 struct cgraph_node *node)
+{
+  struct ipa_node_params *info = IPA_NODE_REF (node);
+  if (info->node_enqueued)
+    return;
+  info->node_enqueued = true;
+  esd->down_stack.safe_push (node);
+}
+
+/* Return the escape origin from a JFUNC regardless of its type, or -1 if there
+   is none.  */
+
+static int
+escape_origin_from_jfunc (struct ipa_jump_func *jfunc)
+{
+  if (jfunc->type == IPA_JF_PASS_THROUGH)
+    return ipa_get_jf_pass_through_formal_id (jfunc);
+  else if (jfunc->type == IPA_JF_ANCESTOR)
+    return ipa_get_jf_ancestor_formal_id (jfunc);
+  else if (jfunc->type == IPA_JF_UNKNOWN
+	   && ipa_get_jf_unknown_esc_ref_valid (jfunc))
+    return ipa_get_jf_unknown_esc_ref_index (jfunc);
+  else if (jfunc->type == IPA_JF_KNOWN_TYPE
+	   && ipa_get_jf_known_type_esc_ref_valid (jfunc))
+    return ipa_get_jf_known_type_esc_ref_index (jfunc);
+  else if (jfunc->type == IPA_JF_CONST
+	   && ipa_get_jf_constant_esc_ref_valid (jfunc))
+    return ipa_get_jf_constant_esc_ref_index (jfunc);
+  else
+    return -1;
+}
+
+/* Callback of cgraph_for_node_and_aliases, spread escape flags to callers.  */
+
+static bool
+spread_escapes_up_from_one_alias (struct cgraph_node *node, void *data)
+{
+  struct escape_spreading_data *esd = (struct escape_spreading_data *) data;
+  struct cgraph_edge *cs;
+
+  for (cs = node->callers; cs; cs = cs->next_caller)
+    {
+      if (cs->caller->thunk.thunk_p)
+	{
+	  cgraph_for_node_and_aliases (cs->caller,
+				       spread_escapes_up_from_one_alias,
+				       esd, true);
+	  continue;
+	}
+      enum availability avail;
+      cgraph_function_or_thunk_node (node, &avail);
+
+      struct ipa_node_params *caller_info = IPA_NODE_REF (cs->caller);
+      struct ipa_edge_args *args = IPA_EDGE_REF (cs);
+      int args_count = ipa_get_cs_argument_count (args);
+      int param_count = ipa_get_param_count (esd->info);
+
+      for (int i = 0; i < args_count; ++i)
+	if (i >= param_count
+	    || ipa_is_ref_escaped (esd->info, i)
+	    || avail == AVAIL_OVERWRITABLE)
+	  {
+	    struct ipa_jump_func *jfunc = ipa_get_ith_jump_func (args, i);
+	    int origin = escape_origin_from_jfunc (jfunc);
+	    if (origin < 0)
+	      continue;
+
+	    if (!ipa_is_ref_escaped (caller_info, origin))
+	      {
+		if (dump_file && (dump_flags & TDF_DETAILS))
+		  fprintf (dump_file, "escape propagated up (%i) from %s/%i to "
+			   "%s/%i ref %i, from arg %i\n", __LINE__,
+			   node->name (), node->order, cs->caller->name (),
+			   cs->caller->order, origin, i);
+
+		ipa_set_ref_escaped (caller_info, origin, true);
+		enque_to_propagate_escapes_up (esd, cs->caller);
+		enque_to_propagate_escapes_down (esd, cs->caller);
+	      }
+	  }
+	else if (ipa_is_ref_clobbered (esd->info, i))
+	  {
+	    struct ipa_jump_func *jfunc = ipa_get_ith_jump_func (args, i);
+	    int origin = escape_origin_from_jfunc (jfunc);
+	    if (origin < 0)
+	      continue;
+
+	    ipa_set_ref_callee_clobbered (caller_info, origin, true);
+	    if (!ipa_is_ref_clobbered (caller_info, origin))
+	      {
+		if (dump_file && (dump_flags & TDF_DETAILS))
+		  fprintf (dump_file, "clobbered propagated up (%i) from "
+			   "%s/%i to %s/%i ref %i, from arg %i\n", __LINE__,
+			   node->name (), node->order, cs->caller->name (),
+			   cs->caller->order, origin, i);
+
+		ipa_set_ref_clobbered (caller_info, origin, true);
+		enque_to_propagate_escapes_up (esd, cs->caller);
+	      }
+	  }
+    }
+  return false;
+}
+
+/* Spread set escape flags from NODE and all its aliases and thunks to their
+   callers.  */
+
+static void
+spread_escapes_up (struct escape_spreading_data *esd, cgraph_node *node)
+{
+  cgraph_for_node_and_aliases (node, spread_escapes_up_from_one_alias,
+			       esd, true);
+}
+
+/* Spread set escape flags from NODE to all its callees.  */
+
+static void
+spread_escapes_down (struct escape_spreading_data *esd, cgraph_node *node)
+{
+  struct cgraph_edge *cs;
+  for (cs = node->callees; cs; cs = cs->next_callee)
+    {
+      enum availability availability;
+      cgraph_node *callee = cgraph_function_node (cs->callee, &availability);
+
+      struct ipa_node_params *callee_info = IPA_NODE_REF (callee);
+      struct ipa_edge_args *args = IPA_EDGE_REF (cs);
+      int args_count = ipa_get_cs_argument_count (args);
+      int parms_count = ipa_get_param_count (callee_info);
+
+      for (int i = 0; i < parms_count; ++i)
+	if (i >= args_count)
+	  {
+	    if (!ipa_is_ref_escaped (callee_info, i))
+	      {
+		if (dump_file && (dump_flags & TDF_DETAILS))
+		  fprintf (dump_file, "escape propagated down (%i) from %s/%i "
+			   "to %s/%i ref %i\n", __LINE__, node->name (),
+			   node->order, callee->name (), callee->order, i);
+
+		ipa_set_ref_escaped (callee_info, i, true);
+		enque_to_propagate_escapes_down (esd, callee);
+	      }
+	  }
+	else
+	  {
+	    struct ipa_jump_func *jfunc = ipa_get_ith_jump_func (args, i);
+	    int origin = escape_origin_from_jfunc (jfunc);
+
+	    if ((origin < 0
+		 || ipa_is_ref_escaped (esd->info, origin))
+		&& !ipa_is_ref_escaped (callee_info, i))
+	      {
+		if (dump_file && (dump_flags & TDF_DETAILS))
+		  fprintf (dump_file, "escape propagated down (%i) from %s/%i "
+			   "to %s/%i ref %i, origin %i\n", __LINE__,
+			   node->name (), node->order, callee->name (),
+			   callee->order, i, origin);
+
+		ipa_set_ref_escaped (callee_info, i, true);
+		enque_to_propagate_escapes_down (esd, callee);
+	      }
+	  }
+    }
+}
+
+/* Spread escape flags through jump functions across the call graph.  */
+
+void
+ipa_spread_escapes ()
+{
+  struct cgraph_node *node;
+  struct escape_spreading_data esd;
+  esd.up_stack = vNULL;
+  esd.down_stack = vNULL;
+
+  if (dump_file)
+    fprintf (dump_file, "\nPropagating escape flags\n");
+
+  ipa_check_create_node_params ();
+  ipa_check_create_edge_args ();
+  FOR_EACH_FUNCTION (node)
+    {
+      struct ipa_node_params *info = IPA_NODE_REF (node);
+      esd.info = info;
+      /* FIXME: This test is copied from IPA-CP but I wonder whether we
+	 should check it for all aliases too?  */
+      if (!node->local.local)
+	{
+	  /* Set escape flags corresponding to formal parameters.  */
+	  int param_count = ipa_get_param_count (esd.info);
+	  for (int i = 0; i < param_count; ++i)
+	    ipa_set_ref_escaped (info, i, true);
+	}
+
+      spread_escapes_up (&esd, node);
+      spread_escapes_down (&esd, node);
+    }
+
+  while (!esd.up_stack.is_empty ())
+    {
+      node = esd.up_stack.pop ();
+      esd.info = IPA_NODE_REF (node);
+      esd.info->node_up_enqueued = false;
+      spread_escapes_up (&esd, node);
+    }
+
+  while (!esd.down_stack.is_empty ())
+    {
+      node = esd.down_stack.pop ();
+      esd.info = IPA_NODE_REF (node);
+      esd.info->node_enqueued = false;
+      spread_escapes_down (&esd, node);
+    }
+
+  esd.up_stack.release ();
+  esd.down_stack.release ();
+
+  FOR_EACH_FUNCTION (node)
+    {
+      struct ipa_node_params *info = IPA_NODE_REF (node);
+      int param_count = ipa_get_param_count (info);
+
+      for (int i = 0; i < param_count; i++)
+	if (!ipa_is_ref_escaped (info, i))
+	  {
+	    cgraph_set_param_noescape (node, i);
+	    if (!ipa_is_ref_clobbered (info, i))
+	      cgraph_set_param_noclobber (node, i);
+	  }
+    }
+}
+
 /* Given a statement CALL which must be a GIMPLE_CALL calling an OBJ_TYPE_REF
    attempt a type-based devirtualization.  If successful, return the
    target function declaration, otherwise return NULL.  */
@@ -2423,7 +3271,7 @@ ipa_intraprocedural_devirtualization (gi
   struct ipa_jump_func jfunc;
   tree otr = gimple_call_fn (call);
 
-  jfunc.type = IPA_JF_UNKNOWN;
+  ipa_set_jf_unknown (&jfunc);
   compute_known_type_jump_func (OBJ_TYPE_REF_OBJECT (otr), &jfunc,
 				call, obj_type_ref_class (otr));
   if (jfunc.type != IPA_JF_KNOWN_TYPE)
@@ -2442,30 +3290,53 @@ ipa_intraprocedural_devirtualization (gi
   return fndecl;
 }
 
+/* Set DST to be an unknown jump function.  If SRC, which must be a known type
+   jump function, has a valid reference index, copy that index to DST,
+   otherwise keep DST's ref index invalid.  */
+
+static void
+make_unknown_jf_from_known_type_jf (struct ipa_jump_func *dst,
+				    struct ipa_jump_func *src)
+{
+  ipa_set_jf_unknown (dst);
+  if (ipa_get_jf_known_type_esc_ref_valid (src))
+    ipa_set_jf_unknown_ref_index (dst,
+				  ipa_get_jf_known_type_esc_ref_index (src));
+}
+
 /* Update the jump function DST when the call graph edge corresponding to SRC is
    is being inlined, knowing that DST is of type ancestor and src of known
    type.  */
 
 static void
-combine_known_type_and_ancestor_jfs (struct ipa_jump_func *src,
-				     struct ipa_jump_func *dst)
+combine_known_type_and_ancestor_jfs (struct ipa_jump_func *dst,
+				     struct ipa_jump_func *src)
 {
-  HOST_WIDE_INT combined_offset;
-  tree combined_type;
-
   if (!ipa_get_jf_ancestor_type_preserved (dst))
     {
-      dst->type = IPA_JF_UNKNOWN;
+      make_unknown_jf_from_known_type_jf (dst, src);
       return;
     }
 
-  combined_offset = ipa_get_jf_known_type_offset (src)
+  bool esc_ref_valid;
+  int  esc_ref_index = -1;
+  if (ipa_get_jf_known_type_esc_ref_valid (src))
+    {
+      esc_ref_valid = true;
+      esc_ref_index = ipa_get_jf_known_type_esc_ref_index (src);
+    }
+  else
+    esc_ref_valid = false;
+
+  HOST_WIDE_INT combined_offset = ipa_get_jf_known_type_offset (src)
     + ipa_get_jf_ancestor_offset (dst);
-  combined_type = ipa_get_jf_ancestor_type (dst);
+  tree combined_type = ipa_get_jf_ancestor_type (dst);
 
   ipa_set_jf_known_type (dst, combined_offset,
 			 ipa_get_jf_known_type_base_type (src),
 			 combined_type);
+  if (esc_ref_valid)
+    ipa_set_jf_known_type_ref_index (dst, esc_ref_index);
 }
 
 /* Update the jump functions associated with call graph edge E when the call
@@ -2478,6 +3349,7 @@ update_jump_functions_after_inlining (st
 {
   struct ipa_edge_args *top = IPA_EDGE_REF (cs);
   struct ipa_edge_args *args = IPA_EDGE_REF (e);
+  struct ipa_node_params *old_info = IPA_NODE_REF (cs->callee);
   int count = ipa_get_cs_argument_count (args);
   int i;
 
@@ -2495,14 +3367,16 @@ update_jump_functions_after_inlining (st
 	     don't.  */
 	  if (dst_fid >= ipa_get_cs_argument_count (top))
 	    {
-	      dst->type = IPA_JF_UNKNOWN;
+	      ipa_set_jf_unknown (dst);
 	      continue;
 	    }
 
 	  src = ipa_get_ith_jump_func (top, dst_fid);
 
 	  if (src->agg.items
-	      && (dst->value.ancestor.agg_preserved || !src->agg.by_ref))
+	      && (dst->value.ancestor.agg_preserved
+		  || !src->agg.by_ref
+		  || ipa_is_param_ref_safely_constant (old_info, dst_fid)))
 	    {
 	      struct ipa_agg_jf_item *item;
 	      int j;
@@ -2518,7 +3392,7 @@ update_jump_functions_after_inlining (st
 	    }
 
 	  if (src->type == IPA_JF_KNOWN_TYPE)
-	    combine_known_type_and_ancestor_jfs (src, dst);
+	    combine_known_type_and_ancestor_jfs (dst, src);
 	  else if (src->type == IPA_JF_PASS_THROUGH
 		   && src->value.pass_through.operation == NOP_EXPR)
 	    {
@@ -2538,7 +3412,7 @@ update_jump_functions_after_inlining (st
 		src->value.ancestor.type_preserved;
 	    }
 	  else
-	    dst->type = IPA_JF_UNKNOWN;
+	    ipa_set_jf_unknown (dst);
 	}
       else if (dst->type == IPA_JF_PASS_THROUGH)
 	{
@@ -2552,20 +3426,19 @@ update_jump_functions_after_inlining (st
 	      int dst_fid = dst->value.pass_through.formal_id;
 	      src = ipa_get_ith_jump_func (top, dst_fid);
 	      bool dst_agg_p = ipa_get_jf_pass_through_agg_preserved (dst);
+	      bool pass_aggs_by_ref = dst_agg_p
+		|| ipa_is_param_ref_safely_constant (old_info, dst_fid);
 
 	      switch (src->type)
 		{
 		case IPA_JF_UNKNOWN:
-		  dst->type = IPA_JF_UNKNOWN;
+		  ipa_set_jf_unknown_copy (dst, src);
 		  break;
 		case IPA_JF_KNOWN_TYPE:
 		  if (ipa_get_jf_pass_through_type_preserved (dst))
-		    ipa_set_jf_known_type (dst,
-					   ipa_get_jf_known_type_offset (src),
-					   ipa_get_jf_known_type_base_type (src),
-					   ipa_get_jf_known_type_component_type (src));
+		    ipa_set_jf_known_type_copy (dst, src);
 		  else
-		    dst->type = IPA_JF_UNKNOWN;
+		    make_unknown_jf_from_known_type_jf (dst, src);
 		  break;
 		case IPA_JF_CONST:
 		  ipa_set_jf_cst_copy (dst, src);
@@ -2614,7 +3487,7 @@ update_jump_functions_after_inlining (st
 		}
 
 	      if (src->agg.items
-		  && (dst_agg_p || !src->agg.by_ref))
+		  && (pass_aggs_by_ref || !src->agg.by_ref))
 		{
 		  /* Currently we do not produce clobber aggregate jump
 		     functions, replace with merging when we do.  */
@@ -2625,7 +3498,7 @@ update_jump_functions_after_inlining (st
 		}
 	    }
 	  else
-	    dst->type = IPA_JF_UNKNOWN;
+	    ipa_set_jf_unknown (dst);
 	}
     }
 }
@@ -2975,11 +3848,12 @@ update_indirect_edges_after_inlining (st
 {
   struct ipa_edge_args *top;
   struct cgraph_edge *ie, *next_ie, *new_direct_edge;
-  struct ipa_node_params *new_root_info;
+  struct ipa_node_params *new_root_info, *old_root_info;
   bool res = false;
 
   ipa_check_create_edge_args ();
   top = IPA_EDGE_REF (cs);
+  old_root_info = IPA_NODE_REF (cs->callee);
   new_root_info = IPA_NODE_REF (cs->caller->global.inlined_to
 				? cs->caller->global.inlined_to
 				: cs->caller);
@@ -3039,6 +3913,7 @@ update_indirect_edges_after_inlining (st
 	       && ipa_get_jf_pass_through_operation (jfunc) == NOP_EXPR)
 	{
 	  if ((ici->agg_contents
+	       && !ipa_is_param_ref_safely_constant (old_root_info, param_index)
 	       && !ipa_get_jf_pass_through_agg_preserved (jfunc))
 	      || (ici->polymorphic
 		  && !ipa_get_jf_pass_through_type_preserved (jfunc)))
@@ -3049,6 +3924,7 @@ update_indirect_edges_after_inlining (st
       else if (jfunc->type == IPA_JF_ANCESTOR)
 	{
 	  if ((ici->agg_contents
+	       && !ipa_is_param_ref_safely_constant (old_root_info, param_index)
 	       && !ipa_get_jf_ancestor_agg_preserved (jfunc))
 	      || (ici->polymorphic
 		  && !ipa_get_jf_ancestor_type_preserved (jfunc)))
@@ -3286,6 +4162,7 @@ void
 ipa_free_node_params_substructures (struct ipa_node_params *info)
 {
   info->descriptors.release ();
+  info->ref_descs.release ();
   free (info->lattices);
   /* Lattice values and their sources are deallocated with their alocation
      pool.  */
@@ -3461,6 +4338,7 @@ ipa_node_duplication_hook (struct cgraph
   new_info = IPA_NODE_REF (dst);
 
   new_info->descriptors = old_info->descriptors.copy ();
+  new_info->ref_descs = old_info->ref_descs.copy ();
   new_info->lattices = NULL;
   new_info->ipcp_orig_node = old_info->ipcp_orig_node;
 
@@ -3577,7 +4455,7 @@ ipa_free_all_structures_after_iinln (voi
 void
 ipa_print_node_params (FILE *f, struct cgraph_node *node)
 {
-  int i, count;
+  unsigned count;
   struct ipa_node_params *info;
 
   if (!node->definition)
@@ -3586,7 +4464,7 @@ ipa_print_node_params (FILE *f, struct c
   fprintf (f, "  function  %s/%i parameter descriptors:\n",
 	   node->name (), node->order);
   count = ipa_get_param_count (info);
-  for (i = 0; i < count; i++)
+  for (unsigned i = 0; i < count; i++)
     {
       int c;
 
@@ -3598,7 +4476,38 @@ ipa_print_node_params (FILE *f, struct c
       if (c == IPA_UNDESCRIBED_USE)
 	fprintf (f, " undescribed_use");
       else
-	fprintf (f, "  controlled_uses=%i", c);
+	fprintf (f, " controlled_uses=%i", c);
+      if (ipa_get_tracked_refs_count (info) > 0)
+	{
+	  if (ipa_is_ref_escaped (info, i))
+	    fprintf (f, " escaped");
+	  else
+	    fprintf (f, " not_esc %s %s",
+		     ipa_is_ref_clobbered (info, i) ? "clobber" : "not_clobber",
+		     ipa_is_ref_callee_clobbered (info, i) ? "call_clobber"
+		     : "not_call_clobber");
+	}
+      fprintf (f, "\n");
+    }
+
+  if ((unsigned) ipa_get_tracked_refs_count (info) > count)
+    {
+      fprintf (f, "   The rest of reference escaped flags: ");
+      bool first = true;
+      for (int i = count; i < ipa_get_tracked_refs_count (info); ++i)
+	{
+	  if (!first)
+	    fprintf (f, ", ");
+	  else
+	    first = false;
+	  if (ipa_is_ref_escaped (info, i))
+	    fprintf (f, "%i: esc", i);
+	  else
+	    fprintf (f, "%i: not_esc %s %s", i,
+		     ipa_is_ref_clobbered (info, i) ? "clobber" : "not_clobber",
+		     ipa_is_ref_callee_clobbered (info, i) ? "call_clobber"
+		     : "not_call_clobber");
+	}
       fprintf (f, "\n");
     }
 }
@@ -4378,16 +5287,34 @@ ipa_write_jump_function (struct output_b
   switch (jump_func->type)
     {
     case IPA_JF_UNKNOWN:
+      bp = bitpack_create (ob->main_stream);
+      bp_pack_value (&bp, ipa_get_jf_unknown_esc_ref_valid (jump_func), 1);
+      streamer_write_bitpack (&bp);
+      if (ipa_get_jf_unknown_esc_ref_valid (jump_func))
+	streamer_write_uhwi (ob, ipa_get_jf_unknown_esc_ref_index (jump_func));
       break;
     case IPA_JF_KNOWN_TYPE:
       streamer_write_uhwi (ob, jump_func->value.known_type.offset);
       stream_write_tree (ob, jump_func->value.known_type.base_type, true);
       stream_write_tree (ob, jump_func->value.known_type.component_type, true);
+      bp = bitpack_create (ob->main_stream);
+      bp_pack_value (&bp, ipa_get_jf_known_type_esc_ref_valid (jump_func), 1);
+      streamer_write_bitpack (&bp);
+      if (ipa_get_jf_known_type_esc_ref_valid (jump_func))
+	streamer_write_uhwi (ob,
+			     ipa_get_jf_known_type_esc_ref_index (jump_func));
       break;
     case IPA_JF_CONST:
       gcc_assert (
 	  EXPR_LOCATION (jump_func->value.constant.value) == UNKNOWN_LOCATION);
       stream_write_tree (ob, jump_func->value.constant.value, true);
+
+      bp = bitpack_create (ob->main_stream);
+      bp_pack_value (&bp, ipa_get_jf_constant_esc_ref_valid (jump_func), 1);
+      streamer_write_bitpack (&bp);
+      if (ipa_get_jf_constant_esc_ref_valid (jump_func))
+	streamer_write_uhwi (ob,
+			     ipa_get_jf_constant_esc_ref_index (jump_func));
       break;
     case IPA_JF_PASS_THROUGH:
       streamer_write_uhwi (ob, jump_func->value.pass_through.operation);
@@ -4448,19 +5375,44 @@ ipa_read_jump_function (struct lto_input
   switch (jftype)
     {
     case IPA_JF_UNKNOWN:
-      jump_func->type = IPA_JF_UNKNOWN;
+      {
+	ipa_set_jf_unknown (jump_func);
+	struct bitpack_d bp = streamer_read_bitpack (ib);
+	bool esc_ref_valid = bp_unpack_value (&bp, 1);
+	if (esc_ref_valid)
+	  {
+	    unsigned esc_ref_idx = streamer_read_uhwi (ib);
+	    ipa_set_jf_unknown_ref_index (jump_func, esc_ref_idx);
+	  }
+      }
       break;
     case IPA_JF_KNOWN_TYPE:
       {
 	HOST_WIDE_INT offset = streamer_read_uhwi (ib);
 	tree base_type = stream_read_tree (ib, data_in);
 	tree component_type = stream_read_tree (ib, data_in);
+	struct bitpack_d bp = streamer_read_bitpack (ib);
+	bool esc_ref_valid = bp_unpack_value (&bp, 1);
 
 	ipa_set_jf_known_type (jump_func, offset, base_type, component_type);
+	if (esc_ref_valid)
+	  {
+	    unsigned esc_ref_idx = streamer_read_uhwi (ib);
+	    ipa_set_jf_known_type_ref_index (jump_func, esc_ref_idx);
+	  }
 	break;
       }
     case IPA_JF_CONST:
-      ipa_set_jf_constant (jump_func, stream_read_tree (ib, data_in), cs);
+      {
+	ipa_set_jf_constant (jump_func, stream_read_tree (ib, data_in), cs);
+	struct bitpack_d bp = streamer_read_bitpack (ib);
+	bool esc_ref_valid = bp_unpack_value (&bp, 1);
+	if (esc_ref_valid)
+	  {
+	    unsigned esc_ref_idx = streamer_read_uhwi (ib);
+	    ipa_set_jf_constant_ref_index (jump_func, esc_ref_idx);
+	  }
+      }
       break;
     case IPA_JF_PASS_THROUGH:
       operation = (enum tree_code) streamer_read_uhwi (ib);
@@ -4592,12 +5544,27 @@ ipa_write_node_info (struct output_block
   gcc_assert (info->analysis_done
 	      || ipa_get_param_count (info) == 0);
   gcc_assert (!info->node_enqueued);
+  gcc_assert (!info->node_up_enqueued);
   gcc_assert (!info->ipcp_orig_node);
   for (j = 0; j < ipa_get_param_count (info); j++)
     bp_pack_value (&bp, ipa_is_param_used (info, j), 1);
   streamer_write_bitpack (&bp);
   for (j = 0; j < ipa_get_param_count (info); j++)
     streamer_write_hwi (ob, ipa_get_controlled_uses (info, j));
+
+  streamer_write_uhwi (ob, ipa_get_tracked_refs_count (info));
+  if (ipa_get_tracked_refs_count (info) > 0)
+    {
+      bp = bitpack_create (ob->main_stream);
+      for (int i = 0; i < ipa_get_tracked_refs_count (info); ++i)
+	{
+	  bp_pack_value (&bp, ipa_is_ref_escaped (info, i), 1);
+	  bp_pack_value (&bp, ipa_is_ref_clobbered (info, i), 1);
+	  bp_pack_value (&bp, ipa_is_ref_callee_clobbered (info, i), 1);
+	}
+      streamer_write_bitpack (&bp);
+    }
+
   for (e = node->callees; e; e = e->next_callee)
     {
       struct ipa_edge_args *args = IPA_EDGE_REF (e);
@@ -4632,15 +5599,30 @@ ipa_read_node_info (struct lto_input_blo
 
   for (k = 0; k < ipa_get_param_count (info); k++)
     info->descriptors[k].move_cost = streamer_read_uhwi (ib);
-    
+
   bp = streamer_read_bitpack (ib);
   if (ipa_get_param_count (info) != 0)
     info->analysis_done = true;
   info->node_enqueued = false;
+  info->node_up_enqueued = false;
   for (k = 0; k < ipa_get_param_count (info); k++)
     ipa_set_param_used (info, k, bp_unpack_value (&bp, 1));
   for (k = 0; k < ipa_get_param_count (info); k++)
     ipa_set_controlled_uses (info, k, streamer_read_hwi (ib));
+
+  unsigned ref_count = streamer_read_uhwi (ib);
+  if (ref_count > 0)
+    {
+      bp = streamer_read_bitpack (ib);
+      info->ref_descs.safe_grow_cleared (ref_count);
+      for (unsigned i = 0; i < ref_count; ++i)
+	{
+	  ipa_set_ref_escaped (info, i, bp_unpack_value (&bp, 1));
+	  ipa_set_ref_clobbered (info, i, bp_unpack_value (&bp, 1));
+	  ipa_set_ref_callee_clobbered (info, i, bp_unpack_value (&bp, 1));
+	}
+    }
+
   for (e = node->callees; e; e = e->next_callee)
     {
       struct ipa_edge_args *args = IPA_EDGE_REF (e);
@@ -4830,7 +5812,7 @@ read_agg_replacement_chain (struct lto_i
   unsigned int count, i;
 
   count = streamer_read_uhwi (ib);
-  for (i = 0; i <count; i++)
+  for (i = 0; i < count; i++)
     {
       struct ipa_agg_replacement_value *av;
       struct bitpack_d bp;
@@ -5134,6 +6116,8 @@ ipcp_transform_function (struct cgraph_n
   fbi.bb_infos.safe_grow_cleared (last_basic_block_for_fn (cfun));
   fbi.param_count = param_count;
   fbi.aa_walked = 0;
+  fbi.escapes = vNULL;
+  fbi.decl_escapes = NULL;
 
   descriptors.safe_grow_cleared (param_count);
   ipa_populate_param_decls (node, descriptors);
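For illustration, with the ipa_print_node_params changes above a parameter
line in the IPA dumps gains the new escape and clobber flags.  Assuming the
unchanged prefix-printing code, a descriptor may be dumped roughly as:

  param #0 used controlled_uses=2 not_esc not_clobber not_call_clobber
  param #1 used controlled_uses=0 escaped

followed, when more references than formal parameters are tracked, by an
extra line listing the flags of the remaining references.
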
Index: src/gcc/ipa-prop.h
===================================================================
--- src.orig/gcc/ipa-prop.h
+++ src/gcc/ipa-prop.h
@@ -73,6 +73,17 @@ enum jump_func_type
   IPA_JF_ANCESTOR	    /* represented by field ancestor */
 };
 
+
+/* Structure describing data which are generally unknown at compile time, yet
+   may have some useful properties.  */
+struct GTY (()) ipa_unknown_data
+{
+  /* If true, the next field contains a valid index.  */
+  unsigned escape_ref_valid : 1;
+  /* Index into escaped_ref flags describing the data this refers to.  */
+  unsigned escape_ref_index : 31;
+};
+
 /* Structure holding data required to describe a known type jump function.  */
 struct GTY(()) ipa_known_type_data
 {
@@ -82,6 +93,10 @@ struct GTY(()) ipa_known_type_data
   tree base_type;
   /* Type of the component of the object that is being described.  */
   tree component_type;
+  /* If true, the next field contains a valid index.  */
+  unsigned escape_ref_valid : 1;
+  /* Index into escaped_ref flags describing the data this refers to.  */
+  unsigned escape_ref_index : 31;
 };
 
 struct ipa_cst_ref_desc;
@@ -93,6 +108,10 @@ struct GTY(()) ipa_constant_data
   tree value;
   /* Pointer to the structure that describes the reference.  */
   struct ipa_cst_ref_desc GTY((skip)) *rdesc;
+  /* If true, the next field contains a valid index.  */
+  unsigned escape_ref_valid : 1;
+  /* Index into escaped_ref flags describing the data this refers to.  */
+  unsigned escape_ref_index : 31;
 };
 
 /* Structure holding data required to describe a pass-through jump function.  */
@@ -187,11 +206,10 @@ struct GTY (()) ipa_jump_func
   struct ipa_agg_jump_function agg;
 
   enum jump_func_type type;
-  /* Represents a value of a jump function.  pass_through is used only in jump
-     function context.  constant represents the actual constant in constant jump
-     functions and member_cst holds constant c++ member functions.  */
+  /* Represents a value of a jump function.  */
   union jump_func_value
   {
+    struct ipa_unknown_data GTY ((tag ("IPA_JF_UNKNOWN"))) unknown;
     struct ipa_known_type_data GTY ((tag ("IPA_JF_KNOWN_TYPE"))) known_type;
     struct ipa_constant_data GTY ((tag ("IPA_JF_CONST"))) constant;
     struct ipa_pass_through_data GTY ((tag ("IPA_JF_PASS_THROUGH"))) pass_through;
@@ -199,6 +217,26 @@ struct GTY (()) ipa_jump_func
   } GTY ((desc ("%1.type"))) value;
 };
 
+/* Return whether the unknown jump function JFUNC has an associated valid index
+   into the caller's escaped_ref flags.  */
+
+static inline bool
+ipa_get_jf_unknown_esc_ref_valid (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_UNKNOWN);
+  return jfunc->value.unknown.escape_ref_valid;
+}
+
+/* Return the index into escaped ref flags of the caller that corresponds to
+   data described by an unknown jump function JFUNC.  */
+
+static inline int
+ipa_get_jf_unknown_esc_ref_index (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_UNKNOWN);
+  gcc_checking_assert (ipa_get_jf_unknown_esc_ref_valid (jfunc));
+  return jfunc->value.unknown.escape_ref_index;
+}
 
 /* Return the offset of the component that is described by a known type jump
    function JFUNC.  */
@@ -228,6 +266,27 @@ ipa_get_jf_known_type_component_type (st
   return jfunc->value.known_type.component_type;
 }
 
+/* Return whether the known type jump function JFUNC has a valid
+   index into the caller's escaped_ref flags.  */
+
+static inline bool
+ipa_get_jf_known_type_esc_ref_valid (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_KNOWN_TYPE);
+  return jfunc->value.known_type.escape_ref_valid;
+}
+
+/* Return the index into escaped ref flags of the caller that corresponds to
+   data described by a known type jump function.  */
+
+static inline int
+ipa_get_jf_known_type_esc_ref_index (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_KNOWN_TYPE);
+  gcc_checking_assert (ipa_get_jf_known_type_esc_ref_valid (jfunc));
+  return jfunc->value.known_type.escape_ref_index;
+}
+
 /* Return the constant stored in a constant jump function JFUNC.  */
 
 static inline tree
@@ -237,6 +296,8 @@ ipa_get_jf_constant (struct ipa_jump_fun
   return jfunc->value.constant.value;
 }
 
+/* Return the reference description stored in constant jump function JFUNC.  */
+
 static inline struct ipa_cst_ref_desc *
 ipa_get_jf_constant_rdesc (struct ipa_jump_func *jfunc)
 {
@@ -244,6 +305,27 @@ ipa_get_jf_constant_rdesc (struct ipa_ju
   return jfunc->value.constant.rdesc;
 }
 
+/* Return whether the constant jump function JFUNC has a valid
+   index into the caller's escaped_ref flags.  */
+
+static inline bool
+ipa_get_jf_constant_esc_ref_valid (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_CONST);
+  return jfunc->value.constant.escape_ref_valid;
+}
+
+/* Return the index into escaped ref flags of the caller that corresponds to
+   data described by a constant jump function JFUNC.  */
+
+static inline int
+ipa_get_jf_constant_esc_ref_index (struct ipa_jump_func *jfunc)
+{
+  gcc_checking_assert (jfunc->type == IPA_JF_CONST);
+  gcc_checking_assert (ipa_get_jf_constant_esc_ref_valid (jfunc));
+  return jfunc->value.constant.escape_ref_index;
+}
+
 /* Return the operand of a pass through jmp function JFUNC.  */
 
 static inline tree
@@ -346,13 +428,41 @@ struct ipa_param_descriptor
      says how many there are.  If any use could not be described by means of
      ipa-prop structures, this is IPA_UNDESCRIBED_USE.  */
   int controlled_uses;
-  unsigned int move_cost : 31;
+  unsigned int move_cost : 30;
   /* The parameter is used.  */
   unsigned used : 1;
 };
 
 struct ipcp_lattice;
 
+/* Interprocedural information about references that we try to prove have not
+   escaped, among other properties.  We keep this information for all formal
+   parameters, even when they are not in fact references, so that indices into
+   param descriptors match those into reference descriptors.  However, we also
+   keep it for some other references that we pass as actual arguments to
+   callees; their indices must be derived from jump functions.
+
+   These flags hold results of intraprocedural summary gathering and
+   intermediate values during interprocedural propagation, as opposed to the
+   corresponding bitmaps in cgraph_node, which hold final results.  After
+   ipa_spread_escapes finishes, the corresponding bits in both structures are
+   the same, but ipa_ref_descriptor is freed at the end of the IPA
+   analysis stage.  */
+
+struct ipa_ref_descriptor
+{
+  /* Set if the reference could have escaped.  */
+  unsigned int escaped : 1;
+  /* Valid only if escaped is false.  Set when the memory the reference refers
+     to could have been written to in this function or in any of the
+     callees.  */
+  unsigned int clobbered : 1;
+  /* Valid only if escaped is false.  Set when the memory the reference refers
+     to could have been written to in any of the callees this function has
+     (i.e. disregarding any modifications in this particular function).  */
+  unsigned int callee_clobbered : 1;
+};
+
 /* ipa_node_params stores information related to formal parameters of functions
    and some other information for interprocedural passes that operate on
    parameters (such as ipa-cp).  */
@@ -362,6 +472,11 @@ struct ipa_node_params
   /* Information about individual formal parameters that are gathered when
      summaries are generated. */
   vec<ipa_param_descriptor> descriptors;
+
+  /* Escape and other information about formal parameters and also some
+     references passed as actual arguments to callees.  */
+  vec<ipa_ref_descriptor> ref_descs;
+
   /* Pointer to an array of structures describing individual formal
      parameters.  */
   struct ipcp_param_lattices *lattices;
@@ -374,8 +489,12 @@ struct ipa_node_params
   /* Whether the param uses analysis and jump function computation has already
      been performed.  */
   unsigned analysis_done : 1;
-  /* Whether the function is enqueued in ipa-cp propagation stack.  */
+  /* Whether the function is enqueued in ipa-cp propagation stack or when
+     propagating escape flags "downwards" (i.e. from callers to callees).  */
   unsigned node_enqueued : 1;
+  /* Whether the function is enqueued in a to-do list of "upwards" escape flag
+     propagation (i.e. from callees to callers).  */
+  unsigned node_up_enqueued : 1;
   /* Whether we should create a specialized version based on values that are
      known to be constant in all contexts.  */
   unsigned do_clone_for_all_contexts : 1;
@@ -452,6 +571,45 @@ ipa_is_param_used (struct ipa_node_param
   return info->descriptors[i].used;
 }
 
+/* Return true if reference number I (there are more of them than parameters;
+   we also have this information for some actual arguments passed to callees)
+   of the function associated with INFO has uncontrollably escaped.  */
+
+static inline bool
+ipa_is_ref_escaped (struct ipa_node_params *info, int i)
+{
+  return info->ref_descs[i].escaped;
+}
+
+/* Return true if reference number I tracked in the function corresponding to
+   INFO may be clobbered during the run of the function or its callees.  */
+
+static inline bool
+ipa_is_ref_clobbered (struct ipa_node_params *info, int i)
+{
+  return info->ref_descs[i].clobbered;
+}
+
+/* Return true if reference number I tracked in the function corresponding to
+   INFO may be clobbered in any of the function's (even indirect) callees.  */
+
+static inline bool
+ipa_is_ref_callee_clobbered (struct ipa_node_params *info, int i)
+{
+  return info->ref_descs[i].callee_clobbered;
+}
+
+/* Return true iff we know that the Ith parameter of the function described by
+   INFO does not escape and that neither it nor any pointers derived from it
+   are used as a base for a memory write in the node described by INFO or any
+   of its (even indirect) callees.  */
+
+static inline bool
+ipa_is_param_ref_safely_constant (struct ipa_node_params *info, int i)
+{
+  return !ipa_is_ref_escaped (info, i) && !ipa_is_ref_clobbered (info, i);
+}
+
 /* Information about replacements done in aggregates for a given node (each
    node has its linked list).  */
 struct GTY(()) ipa_agg_replacement_value
@@ -589,6 +747,7 @@ tree ipa_intraprocedural_devirtualizatio
 
 /* Functions related to both.  */
 void ipa_analyze_node (struct cgraph_node *);
+void ipa_spread_escapes ();
 
 /* Aggregate jump function related functions.  */
 tree ipa_find_agg_cst_for_param (struct ipa_agg_jump_function *, HOST_WIDE_INT,
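To make the semantics of the three ipa_ref_descriptor flags concrete, here is
a small sketch (not part of the patch; the comments state what the analysis
described above should ideally derive for each pointer):

  int g;
  int *global_ptr;

  void
  callee (int *p)
  {
    *p = 5;            /* Writes through P: P is clobbered.  */
  }

  void
  caller (int *a, int *b, int *c)
  {
    g = *a;            /* A is only read: not escaped, not clobbered.  */
    callee (b);        /* B does not escape but is written in a callee:
                          not escaped, clobbered, callee_clobbered.  */
    global_ptr = c;    /* C is stored to global memory: escaped, so its
                          clobber flags carry no meaning.  */
  }
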
Index: src/gcc/ipa-cp.c
===================================================================
--- src.orig/gcc/ipa-cp.c
+++ src/gcc/ipa-cp.c
@@ -1311,12 +1311,17 @@ merge_aggregate_lattices (struct cgraph_
    rules about propagating values passed by reference.  */
 
 static bool
-agg_pass_through_permissible_p (struct ipcp_param_lattices *src_plats,
+agg_pass_through_permissible_p (struct ipa_node_params *caller_info,
+				struct ipcp_param_lattices *src_plats,
 				struct ipa_jump_func *jfunc)
 {
-  return src_plats->aggs
-    && (!src_plats->aggs_by_ref
-	|| ipa_get_jf_pass_through_agg_preserved (jfunc));
+  if (!src_plats->aggs)
+    return false;
+
+  return !src_plats->aggs_by_ref
+    || ipa_is_param_ref_safely_constant (caller_info,
+				 ipa_get_jf_pass_through_formal_id (jfunc))
+    || ipa_get_jf_pass_through_agg_preserved (jfunc);
 }
 
 /* Propagate scalar values across jump function JFUNC that is associated with
@@ -1340,7 +1345,7 @@ propagate_aggs_accross_jump_function (st
       struct ipcp_param_lattices *src_plats;
 
       src_plats = ipa_get_parm_lattices (caller_info, src_idx);
-      if (agg_pass_through_permissible_p (src_plats, jfunc))
+      if (agg_pass_through_permissible_p (caller_info, src_plats, jfunc))
 	{
 	  /* Currently we do not produce clobber aggregate jump
 	     functions, replace with merging when we do.  */
@@ -1351,15 +1356,16 @@ propagate_aggs_accross_jump_function (st
       else
 	ret |= set_agg_lats_contain_variable (dest_plats);
     }
-  else if (jfunc->type == IPA_JF_ANCESTOR
-	   && ipa_get_jf_ancestor_agg_preserved (jfunc))
+  else if (jfunc->type == IPA_JF_ANCESTOR)
     {
       struct ipa_node_params *caller_info = IPA_NODE_REF (cs->caller);
       int src_idx = ipa_get_jf_ancestor_formal_id (jfunc);
       struct ipcp_param_lattices *src_plats;
 
       src_plats = ipa_get_parm_lattices (caller_info, src_idx);
-      if (src_plats->aggs && src_plats->aggs_by_ref)
+      if (src_plats->aggs && src_plats->aggs_by_ref
+	  && (ipa_is_param_ref_safely_constant (caller_info, src_idx)
+	      || ipa_get_jf_ancestor_agg_preserved (jfunc)))
 	{
 	  /* Currently we do not produce clobber aggregate jump
 	     functions, replace with merging when we do.  */
@@ -1367,7 +1373,7 @@ propagate_aggs_accross_jump_function (st
 	  ret |= merge_aggregate_lattices (cs, dest_plats, src_plats, src_idx,
 					   ipa_get_jf_ancestor_offset (jfunc));
 	}
-      else if (!src_plats->aggs_by_ref)
+      else if (src_plats->aggs && !src_plats->aggs_by_ref)
 	ret |= set_agg_lats_to_bottom (dest_plats);
       else
 	ret |= set_agg_lats_contain_variable (dest_plats);
@@ -3037,39 +3043,49 @@ intersect_aggregates_with_edge (struct c
 	  struct ipcp_param_lattices *orig_plats;
 	  orig_plats = ipa_get_parm_lattices (IPA_NODE_REF (orig_node),
 					      src_idx);
-	  if (agg_pass_through_permissible_p (orig_plats, jfunc))
+	  if (!agg_pass_through_permissible_p (caller_info, orig_plats, jfunc))
 	    {
-	      if (!inter.exists ())
-		inter = agg_replacements_to_vector (cs->caller, src_idx, 0);
-	      else
-		intersect_with_agg_replacements (cs->caller, src_idx,
-						 &inter, 0);
+	      inter.release ();
+	      return vNULL;
 	    }
+	  if (!inter.exists ())
+	    inter = agg_replacements_to_vector (cs->caller, src_idx, 0);
+	  else
+	    intersect_with_agg_replacements (cs->caller, src_idx,
+					     &inter, 0);
 	}
       else
 	{
 	  struct ipcp_param_lattices *src_plats;
 	  src_plats = ipa_get_parm_lattices (caller_info, src_idx);
-	  if (agg_pass_through_permissible_p (src_plats, jfunc))
+	  if (!agg_pass_through_permissible_p (caller_info, src_plats, jfunc))
 	    {
-	      /* Currently we do not produce clobber aggregate jump
-		 functions, adjust when we do.  */
-	      gcc_checking_assert (!jfunc->agg.items);
-	      if (!inter.exists ())
-		inter = copy_plats_to_inter (src_plats, 0);
-	      else
-		intersect_with_plats (src_plats, &inter, 0);
+	      inter.release ();
+	      return vNULL;
 	    }
+	  /* Currently we do not produce clobber aggregate jump functions,
+	     adjust when we do.  */
+	  gcc_checking_assert (!jfunc->agg.items);
+	  if (!inter.exists ())
+	    inter = copy_plats_to_inter (src_plats, 0);
+	  else
+	    intersect_with_plats (src_plats, &inter, 0);
 	}
     }
-  else if (jfunc->type == IPA_JF_ANCESTOR
-	   && ipa_get_jf_ancestor_agg_preserved (jfunc))
+  else if (jfunc->type == IPA_JF_ANCESTOR)
     {
       struct ipa_node_params *caller_info = IPA_NODE_REF (cs->caller);
       int src_idx = ipa_get_jf_ancestor_formal_id (jfunc);
       struct ipcp_param_lattices *src_plats;
       HOST_WIDE_INT delta = ipa_get_jf_ancestor_offset (jfunc);
 
+      if (!ipa_is_param_ref_safely_constant (caller_info, src_idx)
+	  && !ipa_get_jf_ancestor_agg_preserved (jfunc))
+	{
+	  inter.release ();
+	  return vNULL;
+	}
+
       if (caller_info->ipcp_orig_node)
 	{
 	  if (!inter.exists ())
@@ -3115,9 +3131,8 @@ intersect_aggregates_with_edge (struct c
 		  break;
 		if (ti->offset == item->offset)
 		  {
-		    gcc_checking_assert (ti->value);
-		    if (values_equal_for_ipcp_p (item->value,
-						 ti->value))
+		    if (ti->value
+			&& values_equal_for_ipcp_p (item->value, ti->value))
 		      found = true;
 		    break;
 		  }
@@ -3686,6 +3701,9 @@ ipcp_driver (void)
 
   ipa_check_create_node_params ();
   ipa_check_create_edge_args ();
+
+  ipa_spread_escapes ();
+
   grow_edge_clone_vectors ();
   edge_duplication_hook_holder =
     cgraph_add_edge_duplication_hook (&ipcp_edge_duplication_hook, NULL);
Index: src/gcc/ipa-inline.c
===================================================================
--- src.orig/gcc/ipa-inline.c
+++ src/gcc/ipa-inline.c
@@ -2143,6 +2143,9 @@ ipa_inline (void)
   if (!optimize)
     return 0;
 
+  if (!flag_ipa_cp)
+    ipa_spread_escapes ();
+
   order = XCNEWVEC (struct cgraph_node *, cgraph_n_nodes);
 
   if (in_lto_p && optimize)
Index: src/gcc/testsuite/gcc.dg/ipa/ipcp-agg-10.c
===================================================================
--- /dev/null
+++ src/gcc/testsuite/gcc.dg/ipa/ipcp-agg-10.c
@@ -0,0 +1,35 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-ipa-sra -fdump-ipa-cp-details"  } */
+/* { dg-add-options bind_pic_locally } */
+
+volatile int g1, g2;
+
+static void __attribute__ ((noinline))
+bar (int *i)
+{
+  g1 = *i;
+}
+
+static void __attribute__ ((noinline))
+foo (int *i)
+{
+  bar (i);
+  bar (i);
+
+  g2 = *i;
+}
+
+int
+main (int argc, char **argv)
+{
+  int i = 8;
+
+  foo (&i);
+
+  return 0;
+}
+
+/* { dg-final { scan-ipa-dump "Creating a specialized node of foo.*for all known contexts" "cp" } } */
+/* { dg-final { scan-ipa-dump "Creating a specialized node of bar.*for all known contexts" "cp" } } */
+/* { dg-final { scan-ipa-dump-times "= 8" 6 "cp" } } */
+/* { dg-final { cleanup-ipa-dump "cp" } } */
Index: src/gcc/cgraph.c
===================================================================
--- src.orig/gcc/cgraph.c
+++ src/gcc/cgraph.c
@@ -3174,4 +3174,47 @@ gimple_check_call_matching_types (gimple
   return true;
 }
 
+/* Return true if parameter number I of NODE is marked as known not to
+   escape.  */
+
+bool
+cgraph_param_noescape_p (cgraph_node *node, int i)
+{
+  return node->global.noescape_parameters
+    && bitmap_bit_p (node->global.noescape_parameters, i);
+}
+
+/* Mark parameter number I of NODE as known not to escape.  */
+
+void
+cgraph_set_param_noescape (cgraph_node *node, int i)
+{
+  if (!node->global.noescape_parameters)
+    node->global.noescape_parameters = BITMAP_GGC_ALLOC ();
+  bitmap_set_bit (node->global.noescape_parameters, i);
+}
+
+/* Return true if memory accessible through parameter number I of NODE is
+   marked as known not to be clobbered.  */
+
+bool
+cgraph_param_noclobber_p (cgraph_node *node, int i)
+{
+  return node->global.noclobber_parameters
+    && bitmap_bit_p (node->global.noclobber_parameters, i);
+}
+
+/* Mark memory reachable by parameter number I of NODE as known not to be
+   clobbered.  */
+
+void
+cgraph_set_param_noclobber (cgraph_node *node, int i)
+{
+  if (!node->global.noclobber_parameters)
+    node->global.noclobber_parameters = BITMAP_GGC_ALLOC ();
+  bitmap_set_bit (node->global.noclobber_parameters, i);
+}
+
+
+
 #include "gt-cgraph.h"
Index: src/gcc/cgraph.h
===================================================================
--- src.orig/gcc/cgraph.h
+++ src/gcc/cgraph.h
@@ -227,6 +227,13 @@ struct GTY(()) cgraph_global_info {
   /* For inline clones this points to the function they will be
      inlined into.  */
   struct cgraph_node *inlined_to;
+
+  /* Parameters that are known not to escape from this function.  */
+  bitmap noescape_parameters;
+
+  /* Parameters for which the memory reached by them is known not to be
+     clobbered.  */
+  bitmap noclobber_parameters;
 };
 
 /* Information about the function that is propagated by the RTL backend.
@@ -870,6 +877,11 @@ void cgraph_speculative_call_info (struc
 				   struct ipa_ref *&);
 extern bool gimple_check_call_matching_types (gimple, tree, bool);
 
+extern bool cgraph_param_noescape_p (cgraph_node *node, int i);
+extern void cgraph_set_param_noescape (cgraph_node *node, int i);
+extern bool cgraph_param_noclobber_p (cgraph_node *node, int i);
+extern void cgraph_set_param_noclobber (cgraph_node *node, int i);
+
 /* In cgraphunit.c  */
 struct asm_node *add_asm_node (tree);
 extern FILE *cgraph_dump_file;
Index: src/gcc/cgraphclones.c
===================================================================
--- src.orig/gcc/cgraphclones.c
+++ src/gcc/cgraphclones.c
@@ -338,6 +338,8 @@ duplicate_thunk_for_node (cgraph_node *t
   gcc_checking_assert (!DECL_INITIAL (new_decl));
   gcc_checking_assert (!DECL_RESULT (new_decl));
   gcc_checking_assert (!DECL_RTL_SET_P (new_decl));
+  gcc_checking_assert (!thunk->global.noescape_parameters
+		       && !thunk->global.noclobber_parameters);
 
   DECL_NAME (new_decl) = clone_function_name (thunk->decl, "artificial_thunk");
   SET_DECL_ASSEMBLER_NAME (new_decl, DECL_NAME (new_decl));
@@ -375,6 +377,26 @@ redirect_edge_duplicating_thunks (struct
   cgraph_redirect_edge_callee (e, n);
 }
 
+/* Copy global.noescape_parameters and global.noclobber_parameters of SRC to
+   DEST.  */
+
+static void
+copy_noescape_noclobber_bitmaps (cgraph_node *dst, cgraph_node *src)
+{
+  if (src->global.noescape_parameters)
+    {
+      dst->global.noescape_parameters = BITMAP_GGC_ALLOC ();
+      bitmap_copy (dst->global.noescape_parameters,
+		   src->global.noescape_parameters);
+    }
+  if (src->global.noclobber_parameters)
+    {
+      dst->global.noclobber_parameters = BITMAP_GGC_ALLOC ();
+      bitmap_copy (dst->global.noclobber_parameters,
+		   src->global.noclobber_parameters);
+    }
+}
+
 /* Create node representing clone of N executed COUNT times.  Decrease
    the execution counts from original node too.
    The new clone will have decl set to DECL that may or may not be the same
@@ -418,8 +440,8 @@ cgraph_clone_node (struct cgraph_node *n
   new_node->local = n->local;
   new_node->externally_visible = false;
   new_node->local.local = true;
-  new_node->global = n->global;
   new_node->global.inlined_to = new_inlined_to;
+  copy_noescape_noclobber_bitmaps (new_node, n);
   new_node->rtl = n->rtl;
   new_node->count = count;
   new_node->frequency = n->frequency;
@@ -883,7 +905,8 @@ cgraph_copy_node_for_versioning (struct
    new_version->local = old_version->local;
    new_version->externally_visible = false;
    new_version->local.local = new_version->definition;
-   new_version->global = old_version->global;
+   new_version->global.inlined_to = old_version->global.inlined_to;
+   copy_noescape_noclobber_bitmaps (new_version, old_version);
    new_version->rtl = old_version->rtl;
    new_version->count = old_version->count;
 
Index: src/gcc/lto-cgraph.c
===================================================================
--- src.orig/gcc/lto-cgraph.c
+++ src/gcc/lto-cgraph.c
@@ -1626,6 +1626,24 @@ output_edge_opt_summary (struct output_b
 {
 }
 
+/* Output a bitmap BMP.  Aimed primarily at bitmaps describing parameters in
+   cgraph_node.  */
+
+static void
+output_param_bitmap (struct output_block *ob, bitmap bmp)
+{
+  if (bmp)
+    {
+      unsigned int index;
+      bitmap_iterator bi;
+      streamer_write_uhwi (ob, bitmap_count_bits (bmp));
+      EXECUTE_IF_SET_IN_BITMAP (bmp, 0, index, bi)
+	streamer_write_uhwi (ob, index);
+    }
+  else
+    streamer_write_uhwi (ob, 0);
+}
+
 /* Output optimization summary for NODE to OB.  */
 
 static void
@@ -1633,29 +1651,15 @@ output_node_opt_summary (struct output_b
 			 struct cgraph_node *node,
 			 lto_symtab_encoder_t encoder)
 {
-  unsigned int index;
-  bitmap_iterator bi;
   struct ipa_replace_map *map;
   struct bitpack_d bp;
   int i;
   struct cgraph_edge *e;
 
-  if (node->clone.args_to_skip)
-    {
-      streamer_write_uhwi (ob, bitmap_count_bits (node->clone.args_to_skip));
-      EXECUTE_IF_SET_IN_BITMAP (node->clone.args_to_skip, 0, index, bi)
-	streamer_write_uhwi (ob, index);
-    }
-  else
-    streamer_write_uhwi (ob, 0);
-  if (node->clone.combined_args_to_skip)
-    {
-      streamer_write_uhwi (ob, bitmap_count_bits (node->clone.combined_args_to_skip));
-      EXECUTE_IF_SET_IN_BITMAP (node->clone.combined_args_to_skip, 0, index, bi)
-	streamer_write_uhwi (ob, index);
-    }
-  else
-    streamer_write_uhwi (ob, 0);
+  output_param_bitmap (ob, node->clone.args_to_skip);
+  output_param_bitmap (ob, node->clone.combined_args_to_skip);
+  output_param_bitmap (ob, node->global.noescape_parameters);
+  output_param_bitmap (ob, node->global.noclobber_parameters);
   streamer_write_uhwi (ob, vec_safe_length (node->clone.tree_map));
   FOR_EACH_VEC_SAFE_ELT (node->clone.tree_map, i, map)
     {
@@ -1724,6 +1728,25 @@ input_edge_opt_summary (struct cgraph_ed
 {
 }
 
+/* Input and return a bitmap that was output by output_param_bitmap.  */
+
+static bitmap
+input_param_bitmap (struct lto_input_block *ib_main)
+{
+  int count;
+
+  count = streamer_read_uhwi (ib_main);
+  if (!count)
+    return NULL;
+  bitmap res = BITMAP_GGC_ALLOC ();
+  for (int i = 0; i < count; i++)
+    {
+      int bit = streamer_read_uhwi (ib_main);
+      bitmap_set_bit (res, bit);
+    }
+  return res;
+}
+
 /* Input optimisation summary of NODE.  */
 
 static void
@@ -1731,28 +1754,14 @@ input_node_opt_summary (struct cgraph_no
 			struct lto_input_block *ib_main,
 			struct data_in *data_in)
 {
-  int i;
   int count;
-  int bit;
+  int i;
   struct bitpack_d bp;
   struct cgraph_edge *e;
-
-  count = streamer_read_uhwi (ib_main);
-  if (count)
-    node->clone.args_to_skip = BITMAP_GGC_ALLOC ();
-  for (i = 0; i < count; i++)
-    {
-      bit = streamer_read_uhwi (ib_main);
-      bitmap_set_bit (node->clone.args_to_skip, bit);
-    }
-  count = streamer_read_uhwi (ib_main);
-  if (count)
-    node->clone.combined_args_to_skip = BITMAP_GGC_ALLOC ();
-  for (i = 0; i < count; i++)
-    {
-      bit = streamer_read_uhwi (ib_main);
-      bitmap_set_bit (node->clone.combined_args_to_skip, bit);
-    }
+  node->clone.args_to_skip = input_param_bitmap (ib_main);
+  node->clone.combined_args_to_skip = input_param_bitmap (ib_main);
+  node->global.noescape_parameters = input_param_bitmap (ib_main);
+  node->global.noclobber_parameters = input_param_bitmap (ib_main);
   count = streamer_read_uhwi (ib_main);
   for (i = 0; i < count; i++)
     {
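The streamed form produced by output_param_bitmap is simply the number of set
bits followed by their indices: a bitmap with bits {0, 2, 5} set is written as
the uhwi sequence 3, 0, 2, 5, and a NULL or empty bitmap as a single 0, which
input_param_bitmap turns back into a NULL bitmap.
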
Index: src/gcc/tree-inline.c
===================================================================
--- src.orig/gcc/tree-inline.c
+++ src/gcc/tree-inline.c
@@ -5248,6 +5248,55 @@ update_clone_info (copy_body_data * id)
     }
 }
 
+/* Update global.noescape_parameters and global.noclobber_parameters of NODE to
+   reflect parameters about to be skipped as indicated in ARGS_TO_SKIP.
+   ORIG_PARM is the chain of parameters of the original node.  */
+
+void
+update_noescape_noclobber_bitmaps (cgraph_node *node, tree orig_parm,
+				   bitmap args_to_skip)
+{
+  if (!args_to_skip || bitmap_empty_p (args_to_skip)
+      || !orig_parm
+      || (!node->global.noescape_parameters
+	  && !node->global.noclobber_parameters))
+    return;
+
+  int count = 0;
+  while (orig_parm)
+    {
+      count++;
+      orig_parm = DECL_CHAIN (orig_parm);
+    }
+
+  bitmap new_noescape = NULL;
+  bitmap new_noclobber = NULL;
+
+  int ni = 0;
+  for (int i = 0; i < count; i++)
+    if (!bitmap_bit_p (args_to_skip, i))
+      {
+	if (node->global.noescape_parameters
+	    && bitmap_bit_p (node->global.noescape_parameters, i))
+	  {
+	    if (!new_noescape)
+	      new_noescape = BITMAP_GGC_ALLOC ();
+	    bitmap_set_bit (new_noescape, ni);
+	  }
+	if (node->global.noclobber_parameters
+	    && bitmap_bit_p (node->global.noclobber_parameters, i))
+	  {
+	    if (!new_noclobber)
+	      new_noclobber = BITMAP_GGC_ALLOC ();
+	    bitmap_set_bit (new_noclobber, ni);
+	  }
+	ni++;
+      }
+  node->global.noescape_parameters = new_noescape;
+  node->global.noclobber_parameters = new_noclobber;
+}
+
+
 /* Create a copy of a function's tree.
    OLD_DECL and NEW_DECL are FUNCTION_DECL tree nodes
    of the original function and the new copied function
@@ -5405,6 +5454,8 @@ tree_function_versioning (tree old_decl,
 	      }
 	  }
       }
+  update_noescape_noclobber_bitmaps (new_version_node,
+				     DECL_ARGUMENTS (old_decl), args_to_skip);
   /* Copy the function's arguments.  */
   if (DECL_ARGUMENTS (old_decl) != NULL_TREE)
     DECL_ARGUMENTS (new_decl) =
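
To see the index compaction performed by update_noescape_noclobber_bitmaps:
with four parameters, args_to_skip = {1} and noescape_parameters = {0, 2},
the surviving parameters 0, 2 and 3 are renumbered to 0, 1 and 2, so the new
noescape bitmap becomes {0, 1}.  A minimal standalone sketch of the same
renumbering (plain C, invented names, flat arrays instead of sparse bitmaps):

  /* Map a set of per-parameter properties OLD to new indices after dropping
     the parameters marked in SKIP; COUNT is the original parameter count.  */
  static void
  compact_param_flags (const bool *old, const bool *skip, bool *out, int count)
  {
    int ni = 0;
    for (int i = 0; i < count; i++)
      if (!skip[i])
	out[ni++] = old[i];
  }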

