]>
Commit | Line | Data |
---|---|---|
518dc859 | 1 | /* Interprocedural constant propagation |
8d9254fc | 2 | Copyright (C) 2005-2020 Free Software Foundation, Inc. |
310bc633 MJ |
3 | |
4 | Contributed by Razya Ladelsky <RAZYA@il.ibm.com> and Martin Jambor | |
5 | <mjambor@suse.cz> | |
b8698a0f | 6 | |
518dc859 | 7 | This file is part of GCC. |
b8698a0f | 8 | |
518dc859 RL |
9 | GCC is free software; you can redistribute it and/or modify it under |
10 | the terms of the GNU General Public License as published by the Free | |
9dcd6f09 | 11 | Software Foundation; either version 3, or (at your option) any later |
518dc859 | 12 | version. |
b8698a0f | 13 | |
518dc859 RL |
14 | GCC is distributed in the hope that it will be useful, but WITHOUT ANY |
15 | WARRANTY; without even the implied warranty of MERCHANTABILITY or | |
16 | FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License | |
17 | for more details. | |
b8698a0f | 18 | |
518dc859 | 19 | You should have received a copy of the GNU General Public License |
9dcd6f09 NC |
20 | along with GCC; see the file COPYING3. If not see |
21 | <http://www.gnu.org/licenses/>. */ | |
518dc859 | 22 | |
310bc633 | 23 | /* Interprocedural constant propagation (IPA-CP). |
b8698a0f | 24 | |
310bc633 | 25 | The goal of this transformation is to |
c43f07af | 26 | |
310bc633 MJ |
27 | 1) discover functions which are always invoked with some arguments with the |
28 | same known constant values and modify the functions so that the | |
29 | subsequent optimizations can take advantage of the knowledge, and | |
c43f07af | 30 | |
310bc633 MJ |
31 | 2) partial specialization - create specialized versions of functions |
32 | transformed in this way if some parameters are known constants only in | |
33 | certain contexts but the estimated tradeoff between speedup and cost size | |
34 | is deemed good. | |
b8698a0f | 35 | |
310bc633 MJ |
36 | The algorithm also propagates types and attempts to perform type based |
37 | devirtualization. Types are propagated much like constants. | |
b8698a0f | 38 | |
310bc633 MJ |
39 | The algorithm basically consists of three stages. In the first, functions |
40 | are analyzed one at a time and jump functions are constructed for all known | |
41 | call-sites. In the second phase, the pass propagates information from the | |
42 | jump functions across the call to reveal what values are available at what | |
43 | call sites, performs estimations of effects of known values on functions and | |
44 | their callees, and finally decides what specialized extra versions should be | |
45 | created. In the third, the special versions materialize and appropriate | |
46 | calls are redirected. | |
c43f07af | 47 | |
310bc633 MJ |
48 | The algorithm used is to a certain extent based on "Interprocedural Constant |
49 | Propagation", by David Callahan, Keith D Cooper, Ken Kennedy, Linda Torczon, | |
50 | Comp86, pg 152-161 and "A Methodology for Procedure Cloning" by Keith D | |
51 | Cooper, Mary W. Hall, and Ken Kennedy. | |
b8698a0f | 52 | |
518dc859 RL |
53 | |
54 | First stage - intraprocedural analysis | |
55 | ======================================= | |
310bc633 | 56 | |
c43f07af | 57 | This phase computes jump_function and modification flags. |
b8698a0f | 58 | |
310bc633 MJ |
59 | A jump function for a call-site represents the values passed as an actual |
60 | arguments of a given call-site. In principle, there are three types of | |
61 | values: | |
62 | ||
63 | Pass through - the caller's formal parameter is passed as an actual | |
155c9907 | 64 | argument, plus an operation on it can be performed. |
ea2c620c | 65 | Constant - a constant is passed as an actual argument. |
518dc859 | 66 | Unknown - neither of the above. |
b8698a0f | 67 | |
310bc633 MJ |
68 | All jump function types are described in detail in ipa-prop.h, together with |
69 | the data structures that represent them and methods of accessing them. | |
b8698a0f | 70 | |
310bc633 | 71 | ipcp_generate_summary() is the main function of the first stage. |
518dc859 RL |
72 | |
73 | Second stage - interprocedural analysis | |
74 | ======================================== | |
b8698a0f | 75 | |
310bc633 MJ |
76 | This stage is itself divided into two phases. In the first, we propagate |
77 | known values over the call graph, in the second, we make cloning decisions. | |
78 | It uses a different algorithm than the original Callahan's paper. | |
b8698a0f | 79 | |
310bc633 MJ |
80 | First, we traverse the functions topologically from callers to callees and, |
81 | for each strongly connected component (SCC), we propagate constants | |
82 | according to previously computed jump functions. We also record what known | |
83 | values depend on other known values and estimate local effects. Finally, we | |
073a8998 | 84 | propagate cumulative information about these effects from dependent values |
310bc633 | 85 | to those on which they depend. |
518dc859 | 86 | |
310bc633 MJ |
87 | Second, we again traverse the call graph in the same topological order and |
88 | make clones for functions which we know are called with the same values in | |
89 | all contexts and decide about extra specialized clones of functions just for | |
90 | some contexts - these decisions are based on both local estimates and | |
91 | cumulative estimates propagated from callees. | |
518dc859 | 92 | |
310bc633 MJ |
93 | ipcp_propagate_stage() and ipcp_decision_stage() together constitute the |
94 | third stage. | |
95 | ||
96 | Third phase - materialization of clones, call statement updates. | |
518dc859 | 97 | ============================================ |
310bc633 MJ |
98 | |
99 | This stage is currently performed by call graph code (mainly in cgraphunit.c | |
100 | and tree-inline.c) according to instructions inserted to the call graph by | |
101 | the second stage. */ | |
518dc859 RL |
102 | |
103 | #include "config.h" | |
104 | #include "system.h" | |
105 | #include "coretypes.h" | |
957060b5 | 106 | #include "backend.h" |
518dc859 | 107 | #include "tree.h" |
2fb9a547 | 108 | #include "gimple-expr.h" |
9fdcd34e | 109 | #include "predict.h" |
c582198b | 110 | #include "alloc-pool.h" |
957060b5 AM |
111 | #include "tree-pass.h" |
112 | #include "cgraph.h" | |
113 | #include "diagnostic.h" | |
957060b5 AM |
114 | #include "fold-const.h" |
115 | #include "gimple-fold.h" | |
dd912cb8 | 116 | #include "symbol-summary.h" |
8bc5448f | 117 | #include "tree-vrp.h" |
518dc859 | 118 | #include "ipa-prop.h" |
cf835838 | 119 | #include "tree-pretty-print.h" |
3cc1cccc | 120 | #include "tree-inline.h" |
27d020cf | 121 | #include "ipa-fnsummary.h" |
310bc633 | 122 | #include "ipa-utils.h" |
209ca542 | 123 | #include "tree-ssa-ccp.h" |
314e6352 ML |
124 | #include "stringpool.h" |
125 | #include "attribs.h" | |
518dc859 | 126 | |
c0cb5055 | 127 | template <typename valtype> class ipcp_value; |
ca30a539 | 128 | |
310bc633 | 129 | /* Describes a particular source for an IPA-CP value. */ |
ca30a539 | 130 | |
c0cb5055 | 131 | template <typename valtype> |
6c1dae73 | 132 | struct ipcp_value_source |
310bc633 | 133 | { |
c0cb5055 | 134 | public: |
2c9561b5 MJ |
135 | /* Aggregate offset of the source, negative if the source is scalar value of |
136 | the argument itself. */ | |
137 | HOST_WIDE_INT offset; | |
310bc633 | 138 | /* The incoming edge that brought the value. */ |
c0cb5055 | 139 | cgraph_edge *cs; |
310bc633 MJ |
140 | /* If the jump function that resulted into his value was a pass-through or an |
141 | ancestor, this is the ipcp_value of the caller from which the described | |
142 | value has been derived. Otherwise it is NULL. */ | |
c0cb5055 | 143 | ipcp_value<valtype> *val; |
310bc633 | 144 | /* Next pointer in a linked list of sources of a value. */ |
c0cb5055 | 145 | ipcp_value_source *next; |
310bc633 MJ |
146 | /* If the jump function that resulted into his value was a pass-through or an |
147 | ancestor, this is the index of the parameter of the caller the jump | |
148 | function references. */ | |
149 | int index; | |
150 | }; | |
ca30a539 | 151 | |
c0cb5055 MJ |
152 | /* Common ancestor for all ipcp_value instantiations. */ |
153 | ||
154 | class ipcp_value_base | |
155 | { | |
156 | public: | |
157 | /* Time benefit and size cost that specializing the function for this value | |
158 | would bring about in this function alone. */ | |
159 | int local_time_benefit, local_size_cost; | |
160 | /* Time benefit and size cost that specializing the function for this value | |
161 | can bring about in it's callees (transitively). */ | |
162 | int prop_time_benefit, prop_size_cost; | |
c8fb20d8 YG |
163 | |
164 | ipcp_value_base () | |
165 | : local_time_benefit (0), local_size_cost (0), | |
166 | prop_time_benefit (0), prop_size_cost (0) {} | |
c0cb5055 MJ |
167 | }; |
168 | ||
310bc633 | 169 | /* Describes one particular value stored in struct ipcp_lattice. */ |
ca30a539 | 170 | |
c0cb5055 MJ |
171 | template <typename valtype> |
172 | class ipcp_value : public ipcp_value_base | |
518dc859 | 173 | { |
c0cb5055 MJ |
174 | public: |
175 | /* The actual value for the given parameter. */ | |
176 | valtype value; | |
310bc633 | 177 | /* The list of sources from which this value originates. */ |
c0cb5055 | 178 | ipcp_value_source <valtype> *sources; |
310bc633 | 179 | /* Next pointers in a linked list of all values in a lattice. */ |
c0cb5055 | 180 | ipcp_value *next; |
310bc633 MJ |
181 | /* Next pointers in a linked list of values in a strongly connected component |
182 | of values. */ | |
c0cb5055 | 183 | ipcp_value *scc_next; |
310bc633 MJ |
184 | /* Next pointers in a linked list of SCCs of values sorted topologically |
185 | according their sources. */ | |
c0cb5055 | 186 | ipcp_value *topo_next; |
310bc633 MJ |
187 | /* A specialized node created for this value, NULL if none has been (so far) |
188 | created. */ | |
c0cb5055 | 189 | cgraph_node *spec_node; |
310bc633 MJ |
190 | /* Depth first search number and low link for topological sorting of |
191 | values. */ | |
192 | int dfs, low_link; | |
f25ae20e | 193 | /* True if this value is currently on the topo-sort stack. */ |
310bc633 | 194 | bool on_stack; |
c0cb5055 | 195 | |
c8fb20d8 YG |
196 | ipcp_value() |
197 | : sources (0), next (0), scc_next (0), topo_next (0), | |
198 | spec_node (0), dfs (0), low_link (0), on_stack (false) {} | |
199 | ||
c0cb5055 MJ |
200 | void add_source (cgraph_edge *cs, ipcp_value *src_val, int src_idx, |
201 | HOST_WIDE_INT offset); | |
310bc633 | 202 | }; |
518dc859 | 203 | |
2c9561b5 | 204 | /* Lattice describing potential values of a formal parameter of a function, or |
5764ee3c | 205 | a part of an aggregate. TOP is represented by a lattice with zero values |
2c9561b5 MJ |
206 | and with contains_variable and bottom flags cleared. BOTTOM is represented |
207 | by a lattice with the bottom flag set. In that case, values and | |
310bc633 MJ |
208 | contains_variable flag should be disregarded. */ |
209 | ||
c0cb5055 | 210 | template <typename valtype> |
6c1dae73 | 211 | struct ipcp_lattice |
518dc859 | 212 | { |
c0cb5055 | 213 | public: |
310bc633 MJ |
214 | /* The list of known values and types in this lattice. Note that values are |
215 | not deallocated if a lattice is set to bottom because there may be value | |
216 | sources referencing them. */ | |
c0cb5055 | 217 | ipcp_value<valtype> *values; |
310bc633 MJ |
218 | /* Number of known values and types in this lattice. */ |
219 | int values_count; | |
2c9561b5 | 220 | /* The lattice contains a variable component (in addition to values). */ |
310bc633 MJ |
221 | bool contains_variable; |
222 | /* The value of the lattice is bottom (i.e. variable and unusable for any | |
223 | propagation). */ | |
224 | bool bottom; | |
c0cb5055 MJ |
225 | |
226 | inline bool is_single_const (); | |
227 | inline bool set_to_bottom (); | |
228 | inline bool set_contains_variable (); | |
229 | bool add_value (valtype newval, cgraph_edge *cs, | |
230 | ipcp_value<valtype> *src_val = NULL, | |
9b14fc33 FX |
231 | int src_idx = 0, HOST_WIDE_INT offset = -1, |
232 | ipcp_value<valtype> **val_p = NULL, | |
233 | bool unlimited = false); | |
c0cb5055 | 234 | void print (FILE * f, bool dump_sources, bool dump_benefits); |
2c9561b5 MJ |
235 | }; |
236 | ||
c0cb5055 MJ |
237 | /* Lattice of tree values with an offset to describe a part of an |
238 | aggregate. */ | |
2c9561b5 | 239 | |
6c1dae73 | 240 | struct ipcp_agg_lattice : public ipcp_lattice<tree> |
2c9561b5 | 241 | { |
c0cb5055 | 242 | public: |
2c9561b5 MJ |
243 | /* Offset that is being described by this lattice. */ |
244 | HOST_WIDE_INT offset; | |
245 | /* Size so that we don't have to re-compute it every time we traverse the | |
246 | list. Must correspond to TYPE_SIZE of all lat values. */ | |
247 | HOST_WIDE_INT size; | |
248 | /* Next element of the linked list. */ | |
249 | struct ipcp_agg_lattice *next; | |
250 | }; | |
251 | ||
209ca542 PK |
252 | /* Lattice of known bits, only capable of holding one value. |
253 | Bitwise constant propagation propagates which bits of a | |
254 | value are constant. | |
255 | For eg: | |
256 | int f(int x) | |
257 | { | |
258 | return some_op (x); | |
259 | } | |
260 | ||
261 | int f1(int y) | |
262 | { | |
263 | if (cond) | |
264 | return f (y & 0xff); | |
265 | else | |
266 | return f (y & 0xf); | |
267 | } | |
268 | ||
269 | In the above case, the param 'x' will always have all | |
270 | the bits (except the bits in lsb) set to 0. | |
271 | Hence the mask of 'x' would be 0xff. The mask | |
272 | reflects that the bits in lsb are unknown. | |
273 | The actual propagated value is given by m_value & ~m_mask. */ | |
274 | ||
275 | class ipcp_bits_lattice | |
276 | { | |
277 | public: | |
278 | bool bottom_p () { return m_lattice_val == IPA_BITS_VARYING; } | |
279 | bool top_p () { return m_lattice_val == IPA_BITS_UNDEFINED; } | |
280 | bool constant_p () { return m_lattice_val == IPA_BITS_CONSTANT; } | |
281 | bool set_to_bottom (); | |
155c9907 JJ |
282 | bool set_to_constant (widest_int, widest_int); |
283 | ||
209ca542 PK |
284 | widest_int get_value () { return m_value; } |
285 | widest_int get_mask () { return m_mask; } | |
286 | ||
287 | bool meet_with (ipcp_bits_lattice& other, unsigned, signop, | |
288 | enum tree_code, tree); | |
289 | ||
290 | bool meet_with (widest_int, widest_int, unsigned); | |
291 | ||
292 | void print (FILE *); | |
293 | ||
294 | private: | |
295 | enum { IPA_BITS_UNDEFINED, IPA_BITS_CONSTANT, IPA_BITS_VARYING } m_lattice_val; | |
296 | ||
297 | /* Similar to ccp_lattice_t, mask represents which bits of value are constant. | |
298 | If a bit in mask is set to 0, then the corresponding bit in | |
299 | value is known to be constant. */ | |
300 | widest_int m_value, m_mask; | |
301 | ||
155c9907 | 302 | bool meet_with_1 (widest_int, widest_int, unsigned); |
209ca542 | 303 | void get_value_and_mask (tree, widest_int *, widest_int *); |
155c9907 | 304 | }; |
209ca542 | 305 | |
8bc5448f KV |
306 | /* Lattice of value ranges. */ |
307 | ||
308 | class ipcp_vr_lattice | |
309 | { | |
310 | public: | |
028d81b1 | 311 | value_range m_vr; |
8bc5448f KV |
312 | |
313 | inline bool bottom_p () const; | |
314 | inline bool top_p () const; | |
315 | inline bool set_to_bottom (); | |
028d81b1 | 316 | bool meet_with (const value_range *p_vr); |
8bc5448f | 317 | bool meet_with (const ipcp_vr_lattice &other); |
54994253 | 318 | void init () { gcc_assert (m_vr.undefined_p ()); } |
8bc5448f KV |
319 | void print (FILE * f); |
320 | ||
321 | private: | |
028d81b1 | 322 | bool meet_with_1 (const value_range *other_vr); |
8bc5448f KV |
323 | }; |
324 | ||
2c9561b5 MJ |
325 | /* Structure containing lattices for a parameter itself and for pieces of |
326 | aggregates that are passed in the parameter or by a reference in a parameter | |
327 | plus some other useful flags. */ | |
328 | ||
c0cb5055 | 329 | class ipcp_param_lattices |
2c9561b5 | 330 | { |
c0cb5055 | 331 | public: |
2c9561b5 | 332 | /* Lattice describing the value of the parameter itself. */ |
c0cb5055 | 333 | ipcp_lattice<tree> itself; |
026c3cfd | 334 | /* Lattice describing the polymorphic contexts of a parameter. */ |
44210a96 | 335 | ipcp_lattice<ipa_polymorphic_call_context> ctxlat; |
2c9561b5 | 336 | /* Lattices describing aggregate parts. */ |
c0cb5055 | 337 | ipcp_agg_lattice *aggs; |
209ca542 PK |
338 | /* Lattice describing known bits. */ |
339 | ipcp_bits_lattice bits_lattice; | |
8bc5448f KV |
340 | /* Lattice describing value range. */ |
341 | ipcp_vr_lattice m_value_range; | |
2c9561b5 MJ |
342 | /* Number of aggregate lattices */ |
343 | int aggs_count; | |
344 | /* True if aggregate data were passed by reference (as opposed to by | |
345 | value). */ | |
346 | bool aggs_by_ref; | |
347 | /* All aggregate lattices contain a variable component (in addition to | |
348 | values). */ | |
349 | bool aggs_contain_variable; | |
350 | /* The value of all aggregate lattices is bottom (i.e. variable and unusable | |
351 | for any propagation). */ | |
352 | bool aggs_bottom; | |
353 | ||
310bc633 MJ |
354 | /* There is a virtual call based on this parameter. */ |
355 | bool virt_call; | |
356 | }; | |
518dc859 | 357 | |
2c9561b5 MJ |
358 | /* Allocation pools for values and their sources in ipa-cp. */ |
359 | ||
fb0b2914 | 360 | object_allocator<ipcp_value<tree> > ipcp_cst_values_pool |
fcb87c50 | 361 | ("IPA-CP constant values"); |
2651e637 | 362 | |
fb0b2914 | 363 | object_allocator<ipcp_value<ipa_polymorphic_call_context> > |
fcb87c50 | 364 | ipcp_poly_ctx_values_pool ("IPA-CP polymorphic contexts"); |
2651e637 | 365 | |
fb0b2914 | 366 | object_allocator<ipcp_value_source<tree> > ipcp_sources_pool |
fcb87c50 | 367 | ("IPA-CP value sources"); |
2651e637 | 368 | |
fb0b2914 | 369 | object_allocator<ipcp_agg_lattice> ipcp_agg_lattice_pool |
fcb87c50 | 370 | ("IPA_CP aggregate lattices"); |
2c9561b5 | 371 | |
310bc633 MJ |
372 | /* Maximal count found in program. */ |
373 | ||
3995f3a2 | 374 | static profile_count max_count; |
310bc633 MJ |
375 | |
376 | /* Original overall size of the program. */ | |
377 | ||
378 | static long overall_size, max_new_size; | |
379 | ||
9e0b0ec3 MP |
380 | /* Node name to unique clone suffix number map. */ |
381 | static hash_map<const char *, unsigned> *clone_num_suffixes; | |
53aedcce | 382 | |
2c9561b5 MJ |
383 | /* Return the param lattices structure corresponding to the Ith formal |
384 | parameter of the function described by INFO. */ | |
99b1c316 MS |
385 | static inline class ipcp_param_lattices * |
386 | ipa_get_parm_lattices (class ipa_node_params *info, int i) | |
518dc859 | 387 | { |
d7da5cc8 | 388 | gcc_assert (i >= 0 && i < ipa_get_param_count (info)); |
310bc633 MJ |
389 | gcc_checking_assert (!info->ipcp_orig_node); |
390 | gcc_checking_assert (info->lattices); | |
391 | return &(info->lattices[i]); | |
518dc859 RL |
392 | } |
393 | ||
2c9561b5 MJ |
394 | /* Return the lattice corresponding to the scalar value of the Ith formal |
395 | parameter of the function described by INFO. */ | |
c0cb5055 | 396 | static inline ipcp_lattice<tree> * |
99b1c316 | 397 | ipa_get_scalar_lat (class ipa_node_params *info, int i) |
2c9561b5 | 398 | { |
99b1c316 | 399 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
2c9561b5 MJ |
400 | return &plats->itself; |
401 | } | |
402 | ||
44210a96 MJ |
403 | /* Return the lattice corresponding to the scalar value of the Ith formal |
404 | parameter of the function described by INFO. */ | |
405 | static inline ipcp_lattice<ipa_polymorphic_call_context> * | |
99b1c316 | 406 | ipa_get_poly_ctx_lat (class ipa_node_params *info, int i) |
44210a96 | 407 | { |
99b1c316 | 408 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
44210a96 MJ |
409 | return &plats->ctxlat; |
410 | } | |
411 | ||
310bc633 MJ |
412 | /* Return whether LAT is a lattice with a single constant and without an |
413 | undefined value. */ | |
414 | ||
c0cb5055 MJ |
415 | template <typename valtype> |
416 | inline bool | |
417 | ipcp_lattice<valtype>::is_single_const () | |
518dc859 | 418 | { |
c0cb5055 | 419 | if (bottom || contains_variable || values_count != 1) |
518dc859 | 420 | return false; |
310bc633 MJ |
421 | else |
422 | return true; | |
518dc859 RL |
423 | } |
424 | ||
310bc633 MJ |
425 | /* Print V which is extracted from a value in a lattice to F. */ |
426 | ||
518dc859 | 427 | static void |
310bc633 | 428 | print_ipcp_constant_value (FILE * f, tree v) |
518dc859 | 429 | { |
3b97a5c7 | 430 | if (TREE_CODE (v) == ADDR_EXPR |
155c9907 | 431 | && TREE_CODE (TREE_OPERAND (v, 0)) == CONST_DECL) |
518dc859 | 432 | { |
310bc633 | 433 | fprintf (f, "& "); |
ef6cb4c7 | 434 | print_generic_expr (f, DECL_INITIAL (TREE_OPERAND (v, 0))); |
518dc859 | 435 | } |
310bc633 | 436 | else |
ef6cb4c7 | 437 | print_generic_expr (f, v); |
518dc859 RL |
438 | } |
439 | ||
44210a96 MJ |
440 | /* Print V which is extracted from a value in a lattice to F. */ |
441 | ||
442 | static void | |
443 | print_ipcp_constant_value (FILE * f, ipa_polymorphic_call_context v) | |
444 | { | |
445 | v.dump(f, false); | |
446 | } | |
447 | ||
2c9561b5 MJ |
448 | /* Print a lattice LAT to F. */ |
449 | ||
c0cb5055 MJ |
450 | template <typename valtype> |
451 | void | |
452 | ipcp_lattice<valtype>::print (FILE * f, bool dump_sources, bool dump_benefits) | |
2c9561b5 | 453 | { |
c0cb5055 | 454 | ipcp_value<valtype> *val; |
2c9561b5 MJ |
455 | bool prev = false; |
456 | ||
c0cb5055 | 457 | if (bottom) |
2c9561b5 MJ |
458 | { |
459 | fprintf (f, "BOTTOM\n"); | |
460 | return; | |
461 | } | |
462 | ||
c0cb5055 | 463 | if (!values_count && !contains_variable) |
2c9561b5 MJ |
464 | { |
465 | fprintf (f, "TOP\n"); | |
466 | return; | |
467 | } | |
468 | ||
c0cb5055 | 469 | if (contains_variable) |
2c9561b5 MJ |
470 | { |
471 | fprintf (f, "VARIABLE"); | |
472 | prev = true; | |
473 | if (dump_benefits) | |
474 | fprintf (f, "\n"); | |
475 | } | |
476 | ||
c0cb5055 | 477 | for (val = values; val; val = val->next) |
2c9561b5 MJ |
478 | { |
479 | if (dump_benefits && prev) | |
480 | fprintf (f, " "); | |
481 | else if (!dump_benefits && prev) | |
482 | fprintf (f, ", "); | |
483 | else | |
484 | prev = true; | |
485 | ||
486 | print_ipcp_constant_value (f, val->value); | |
487 | ||
488 | if (dump_sources) | |
489 | { | |
c0cb5055 | 490 | ipcp_value_source<valtype> *s; |
2c9561b5 MJ |
491 | |
492 | fprintf (f, " [from:"); | |
493 | for (s = val->sources; s; s = s->next) | |
e3951b03 JH |
494 | fprintf (f, " %i(%f)", s->cs->caller->order, |
495 | s->cs->sreal_frequency ().to_double ()); | |
2c9561b5 MJ |
496 | fprintf (f, "]"); |
497 | } | |
498 | ||
499 | if (dump_benefits) | |
500 | fprintf (f, " [loc_time: %i, loc_size: %i, " | |
501 | "prop_time: %i, prop_size: %i]\n", | |
502 | val->local_time_benefit, val->local_size_cost, | |
503 | val->prop_time_benefit, val->prop_size_cost); | |
504 | } | |
505 | if (!dump_benefits) | |
506 | fprintf (f, "\n"); | |
507 | } | |
508 | ||
209ca542 PK |
509 | void |
510 | ipcp_bits_lattice::print (FILE *f) | |
511 | { | |
512 | if (top_p ()) | |
513 | fprintf (f, " Bits unknown (TOP)\n"); | |
514 | else if (bottom_p ()) | |
515 | fprintf (f, " Bits unusable (BOTTOM)\n"); | |
516 | else | |
517 | { | |
518 | fprintf (f, " Bits: value = "); print_hex (get_value (), f); | |
519 | fprintf (f, ", mask = "); print_hex (get_mask (), f); | |
520 | fprintf (f, "\n"); | |
521 | } | |
522 | } | |
523 | ||
8bc5448f KV |
524 | /* Print value range lattice to F. */ |
525 | ||
526 | void | |
527 | ipcp_vr_lattice::print (FILE * f) | |
528 | { | |
c49eeac3 | 529 | dump_value_range (f, &m_vr); |
8bc5448f KV |
530 | } |
531 | ||
c43f07af | 532 | /* Print all ipcp_lattices of all functions to F. */ |
310bc633 | 533 | |
518dc859 | 534 | static void |
310bc633 | 535 | print_all_lattices (FILE * f, bool dump_sources, bool dump_benefits) |
518dc859 RL |
536 | { |
537 | struct cgraph_node *node; | |
538 | int i, count; | |
3cc1cccc | 539 | |
310bc633 MJ |
540 | fprintf (f, "\nLattices:\n"); |
541 | FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node) | |
518dc859 | 542 | { |
99b1c316 | 543 | class ipa_node_params *info; |
0eae6bab | 544 | |
0eae6bab | 545 | info = IPA_NODE_REF (node); |
9ee46552 MP |
546 | /* Skip constprop clones since we don't make lattices for them. */ |
547 | if (info->ipcp_orig_node) | |
548 | continue; | |
464d0118 | 549 | fprintf (f, " Node: %s:\n", node->dump_name ()); |
c43f07af | 550 | count = ipa_get_param_count (info); |
518dc859 RL |
551 | for (i = 0; i < count; i++) |
552 | { | |
2c9561b5 | 553 | struct ipcp_agg_lattice *aglat; |
99b1c316 | 554 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
ca30a539 | 555 | fprintf (f, " param [%d]: ", i); |
c0cb5055 | 556 | plats->itself.print (f, dump_sources, dump_benefits); |
44210a96 MJ |
557 | fprintf (f, " ctxs: "); |
558 | plats->ctxlat.print (f, dump_sources, dump_benefits); | |
209ca542 | 559 | plats->bits_lattice.print (f); |
8bc5448f KV |
560 | fprintf (f, " "); |
561 | plats->m_value_range.print (f); | |
562 | fprintf (f, "\n"); | |
2c9561b5 MJ |
563 | if (plats->virt_call) |
564 | fprintf (f, " virt_call flag set\n"); | |
565 | ||
566 | if (plats->aggs_bottom) | |
310bc633 | 567 | { |
2c9561b5 | 568 | fprintf (f, " AGGS BOTTOM\n"); |
310bc633 MJ |
569 | continue; |
570 | } | |
2c9561b5 MJ |
571 | if (plats->aggs_contain_variable) |
572 | fprintf (f, " AGGS VARIABLE\n"); | |
573 | for (aglat = plats->aggs; aglat; aglat = aglat->next) | |
310bc633 | 574 | { |
2c9561b5 MJ |
575 | fprintf (f, " %soffset " HOST_WIDE_INT_PRINT_DEC ": ", |
576 | plats->aggs_by_ref ? "ref " : "", aglat->offset); | |
c0cb5055 | 577 | aglat->print (f, dump_sources, dump_benefits); |
310bc633 | 578 | } |
518dc859 RL |
579 | } |
580 | } | |
581 | } | |
582 | ||
310bc633 MJ |
583 | /* Determine whether it is at all technically possible to create clones of NODE |
584 | and store this information in the ipa_node_params structure associated | |
585 | with NODE. */ | |
27dbd3ac | 586 | |
310bc633 | 587 | static void |
7e729474 | 588 | determine_versionability (struct cgraph_node *node, |
99b1c316 | 589 | class ipa_node_params *info) |
27dbd3ac | 590 | { |
310bc633 | 591 | const char *reason = NULL; |
0818c24c | 592 | |
aa229804 MJ |
593 | /* There are a number of generic reasons functions cannot be versioned. We |
594 | also cannot remove parameters if there are type attributes such as fnspec | |
595 | present. */ | |
67348ccc | 596 | if (node->alias || node->thunk.thunk_p) |
310bc633 | 597 | reason = "alias or thunk"; |
87f94429 | 598 | else if (!node->versionable) |
d7da5cc8 | 599 | reason = "not a tree_versionable_function"; |
d52f5295 | 600 | else if (node->get_availability () <= AVAIL_INTERPOSABLE) |
310bc633 | 601 | reason = "insufficient body availability"; |
d31d42c7 JJ |
602 | else if (!opt_for_fn (node->decl, optimize) |
603 | || !opt_for_fn (node->decl, flag_ipa_cp)) | |
604 | reason = "non-optimized function"; | |
0136f8f0 AH |
605 | else if (lookup_attribute ("omp declare simd", DECL_ATTRIBUTES (node->decl))) |
606 | { | |
607 | /* Ideally we should clone the SIMD clones themselves and create | |
608 | vector copies of them, so IPA-cp and SIMD clones can happily | |
609 | coexist, but that may not be worth the effort. */ | |
610 | reason = "function has SIMD clones"; | |
611 | } | |
58b3986e ES |
612 | else if (lookup_attribute ("target_clones", DECL_ATTRIBUTES (node->decl))) |
613 | { | |
614 | /* Ideally we should clone the target clones themselves and create | |
615 | copies of them, so IPA-cp and target clones can happily | |
616 | coexist, but that may not be worth the effort. */ | |
617 | reason = "function target_clones attribute"; | |
618 | } | |
1f26ac87 JM |
619 | /* Don't clone decls local to a comdat group; it breaks and for C++ |
620 | decloned constructors, inlining is always better anyway. */ | |
d52f5295 | 621 | else if (node->comdat_local_p ()) |
1f26ac87 | 622 | reason = "comdat-local function"; |
58928b35 ML |
623 | else if (node->calls_comdat_local) |
624 | { | |
625 | /* TODO: call is versionable if we make sure that all | |
626 | callers are inside of a comdat group. */ | |
627 | reason = "calls comdat-local function"; | |
628 | } | |
27dbd3ac | 629 | |
ea49d40b | 630 | /* Functions calling BUILT_IN_VA_ARG_PACK and BUILT_IN_VA_ARG_PACK_LEN |
5d4991da JH |
631 | work only when inlined. Cloning them may still lead to better code |
632 | because ipa-cp will not give up on cloning further. If the function is | |
633 | external this however leads to wrong code because we may end up producing | |
ea49d40b JH |
634 | offline copy of the function. */ |
635 | if (DECL_EXTERNAL (node->decl)) | |
636 | for (cgraph_edge *edge = node->callees; !reason && edge; | |
637 | edge = edge->next_callee) | |
3d78e008 | 638 | if (fndecl_built_in_p (edge->callee->decl, BUILT_IN_NORMAL)) |
ea49d40b JH |
639 | { |
640 | if (DECL_FUNCTION_CODE (edge->callee->decl) == BUILT_IN_VA_ARG_PACK) | |
641 | reason = "external function which calls va_arg_pack"; | |
642 | if (DECL_FUNCTION_CODE (edge->callee->decl) | |
643 | == BUILT_IN_VA_ARG_PACK_LEN) | |
644 | reason = "external function which calls va_arg_pack_len"; | |
645 | } | |
646 | ||
67348ccc | 647 | if (reason && dump_file && !node->alias && !node->thunk.thunk_p) |
464d0118 ML |
648 | fprintf (dump_file, "Function %s is not versionable, reason: %s.\n", |
649 | node->dump_name (), reason); | |
27dbd3ac | 650 | |
7e729474 | 651 | info->versionable = (reason == NULL); |
27dbd3ac RH |
652 | } |
653 | ||
310bc633 MJ |
654 | /* Return true if it is at all technically possible to create clones of a |
655 | NODE. */ | |
656 | ||
ca30a539 | 657 | static bool |
310bc633 | 658 | ipcp_versionable_function_p (struct cgraph_node *node) |
ca30a539 | 659 | { |
6cf67b62 | 660 | return IPA_NODE_REF (node) && IPA_NODE_REF (node)->versionable; |
310bc633 | 661 | } |
ca30a539 | 662 | |
310bc633 | 663 | /* Structure holding accumulated information about callers of a node. */ |
749f25d8 | 664 | |
310bc633 MJ |
665 | struct caller_statistics |
666 | { | |
3995f3a2 | 667 | profile_count count_sum; |
310bc633 MJ |
668 | int n_calls, n_hot_calls, freq_sum; |
669 | }; | |
ca30a539 | 670 | |
310bc633 | 671 | /* Initialize fields of STAT to zeroes. */ |
530f3a1b | 672 | |
310bc633 MJ |
673 | static inline void |
674 | init_caller_stats (struct caller_statistics *stats) | |
675 | { | |
3995f3a2 | 676 | stats->count_sum = profile_count::zero (); |
310bc633 MJ |
677 | stats->n_calls = 0; |
678 | stats->n_hot_calls = 0; | |
679 | stats->freq_sum = 0; | |
680 | } | |
681 | ||
682 | /* Worker callback of cgraph_for_node_and_aliases accumulating statistics of | |
683 | non-thunk incoming edges to NODE. */ | |
684 | ||
685 | static bool | |
686 | gather_caller_stats (struct cgraph_node *node, void *data) | |
687 | { | |
688 | struct caller_statistics *stats = (struct caller_statistics *) data; | |
689 | struct cgraph_edge *cs; | |
690 | ||
691 | for (cs = node->callers; cs; cs = cs->next_caller) | |
94a2f772 | 692 | if (!cs->caller->thunk.thunk_p) |
310bc633 | 693 | { |
1bad9c18 JH |
694 | if (cs->count.ipa ().initialized_p ()) |
695 | stats->count_sum += cs->count.ipa (); | |
696 | stats->freq_sum += cs->frequency (); | |
310bc633 | 697 | stats->n_calls++; |
3dafb85c | 698 | if (cs->maybe_hot_p ()) |
310bc633 MJ |
699 | stats->n_hot_calls ++; |
700 | } | |
701 | return false; | |
702 | ||
703 | } | |
704 | ||
705 | /* Return true if this NODE is viable candidate for cloning. */ | |
706 | ||
707 | static bool | |
708 | ipcp_cloning_candidate_p (struct cgraph_node *node) | |
709 | { | |
710 | struct caller_statistics stats; | |
711 | ||
d52f5295 | 712 | gcc_checking_assert (node->has_gimple_body_p ()); |
b8698a0f | 713 | |
2bf86c84 | 714 | if (!opt_for_fn (node->decl, flag_ipa_cp_clone)) |
ca30a539 JH |
715 | { |
716 | if (dump_file) | |
155c9907 | 717 | fprintf (dump_file, "Not considering %s for cloning; " |
310bc633 | 718 | "-fipa-cp-clone disabled.\n", |
155c9907 | 719 | node->name ()); |
ca30a539 JH |
720 | return false; |
721 | } | |
ca30a539 | 722 | |
5af56ae8 | 723 | if (node->optimize_for_size_p ()) |
ca30a539 JH |
724 | { |
725 | if (dump_file) | |
155c9907 | 726 | fprintf (dump_file, "Not considering %s for cloning; " |
310bc633 | 727 | "optimizing it for size.\n", |
155c9907 | 728 | node->name ()); |
ca30a539 JH |
729 | return false; |
730 | } | |
731 | ||
310bc633 | 732 | init_caller_stats (&stats); |
d52f5295 | 733 | node->call_for_symbol_thunks_and_aliases (gather_caller_stats, &stats, false); |
310bc633 | 734 | |
f658ad30 | 735 | if (ipa_size_summaries->get (node)->self_size < stats.n_calls) |
ca30a539 JH |
736 | { |
737 | if (dump_file) | |
155c9907 JJ |
738 | fprintf (dump_file, "Considering %s for cloning; code might shrink.\n", |
739 | node->name ()); | |
310bc633 | 740 | return true; |
ca30a539 JH |
741 | } |
742 | ||
743 | /* When profile is available and function is hot, propagate into it even if | |
744 | calls seems cold; constant propagation can improve function's speed | |
61502ca8 | 745 | significantly. */ |
3995f3a2 | 746 | if (max_count > profile_count::zero ()) |
ca30a539 | 747 | { |
1bad9c18 | 748 | if (stats.count_sum > node->count.ipa ().apply_scale (90, 100)) |
ca30a539 JH |
749 | { |
750 | if (dump_file) | |
310bc633 MJ |
751 | fprintf (dump_file, "Considering %s for cloning; " |
752 | "usually called directly.\n", | |
fec39fa6 | 753 | node->name ()); |
ca30a539 | 754 | return true; |
155c9907 | 755 | } |
ca30a539 | 756 | } |
310bc633 | 757 | if (!stats.n_hot_calls) |
ca30a539 JH |
758 | { |
759 | if (dump_file) | |
760 | fprintf (dump_file, "Not considering %s for cloning; no hot calls.\n", | |
fec39fa6 | 761 | node->name ()); |
ed102b70 | 762 | return false; |
ca30a539 JH |
763 | } |
764 | if (dump_file) | |
765 | fprintf (dump_file, "Considering %s for cloning.\n", | |
fec39fa6 | 766 | node->name ()); |
ca30a539 JH |
767 | return true; |
768 | } | |
769 | ||
c0cb5055 MJ |
770 | template <typename valtype> |
771 | class value_topo_info | |
772 | { | |
773 | public: | |
774 | /* Head of the linked list of topologically sorted values. */ | |
775 | ipcp_value<valtype> *values_topo; | |
776 | /* Stack for creating SCCs, represented by a linked list too. */ | |
777 | ipcp_value<valtype> *stack; | |
778 | /* Counter driving the algorithm in add_val_to_toposort. */ | |
779 | int dfs_counter; | |
780 | ||
781 | value_topo_info () : values_topo (NULL), stack (NULL), dfs_counter (0) | |
782 | {} | |
783 | void add_val (ipcp_value<valtype> *cur_val); | |
784 | void propagate_effects (); | |
785 | }; | |
786 | ||
310bc633 | 787 | /* Arrays representing a topological ordering of call graph nodes and a stack |
c0cb5055 MJ |
788 | of nodes used during constant propagation and also data required to perform |
789 | topological sort of values and propagation of benefits in the determined | |
790 | order. */ | |
3949c4a7 | 791 | |
c0cb5055 | 792 | class ipa_topo_info |
3949c4a7 | 793 | { |
c0cb5055 MJ |
794 | public: |
795 | /* Array with obtained topological order of cgraph nodes. */ | |
310bc633 | 796 | struct cgraph_node **order; |
c0cb5055 MJ |
797 | /* Stack of cgraph nodes used during propagation within SCC until all values |
798 | in the SCC stabilize. */ | |
310bc633 MJ |
799 | struct cgraph_node **stack; |
800 | int nnodes, stack_top; | |
c0cb5055 MJ |
801 | |
802 | value_topo_info<tree> constants; | |
44210a96 | 803 | value_topo_info<ipa_polymorphic_call_context> contexts; |
c0cb5055 MJ |
804 | |
805 | ipa_topo_info () : order(NULL), stack(NULL), nnodes(0), stack_top(0), | |
806 | constants () | |
807 | {} | |
310bc633 MJ |
808 | }; |
809 | ||
97e59627 ML |
810 | /* Skip edges from and to nodes without ipa_cp enabled. |
811 | Ignore not available symbols. */ | |
812 | ||
813 | static bool | |
814 | ignore_edge_p (cgraph_edge *e) | |
815 | { | |
816 | enum availability avail; | |
817 | cgraph_node *ultimate_target | |
818 | = e->callee->function_or_virtual_thunk_symbol (&avail, e->caller); | |
819 | ||
820 | return (avail <= AVAIL_INTERPOSABLE | |
e72763e2 | 821 | || !opt_for_fn (ultimate_target->decl, optimize) |
97e59627 ML |
822 | || !opt_for_fn (ultimate_target->decl, flag_ipa_cp)); |
823 | } | |
824 | ||
310bc633 MJ |
825 | /* Allocate the arrays in TOPO and topologically sort the nodes into order. */ |
826 | ||
827 | static void | |
99b1c316 | 828 | build_toporder_info (class ipa_topo_info *topo) |
310bc633 | 829 | { |
3dafb85c ML |
830 | topo->order = XCNEWVEC (struct cgraph_node *, symtab->cgraph_count); |
831 | topo->stack = XCNEWVEC (struct cgraph_node *, symtab->cgraph_count); | |
832 | ||
c0cb5055 | 833 | gcc_checking_assert (topo->stack_top == 0); |
97e59627 ML |
834 | topo->nnodes = ipa_reduced_postorder (topo->order, true, |
835 | ignore_edge_p); | |
3949c4a7 MJ |
836 | } |
837 | ||
310bc633 MJ |
838 | /* Free information about strongly connected components and the arrays in |
839 | TOPO. */ | |
840 | ||
518dc859 | 841 | static void |
99b1c316 | 842 | free_toporder_info (class ipa_topo_info *topo) |
310bc633 MJ |
843 | { |
844 | ipa_free_postorder_info (); | |
845 | free (topo->order); | |
846 | free (topo->stack); | |
847 | } | |
848 | ||
849 | /* Add NODE to the stack in TOPO, unless it is already there. */ | |
850 | ||
851 | static inline void | |
99b1c316 | 852 | push_node_to_stack (class ipa_topo_info *topo, struct cgraph_node *node) |
518dc859 | 853 | { |
99b1c316 | 854 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 MJ |
855 | if (info->node_enqueued) |
856 | return; | |
857 | info->node_enqueued = 1; | |
858 | topo->stack[topo->stack_top++] = node; | |
859 | } | |
518dc859 | 860 | |
310bc633 MJ |
861 | /* Pop a node from the stack in TOPO and return it or return NULL if the stack |
862 | is empty. */ | |
ca30a539 | 863 | |
310bc633 | 864 | static struct cgraph_node * |
99b1c316 | 865 | pop_node_from_stack (class ipa_topo_info *topo) |
310bc633 MJ |
866 | { |
867 | if (topo->stack_top) | |
3949c4a7 | 868 | { |
310bc633 MJ |
869 | struct cgraph_node *node; |
870 | topo->stack_top--; | |
871 | node = topo->stack[topo->stack_top]; | |
872 | IPA_NODE_REF (node)->node_enqueued = 0; | |
873 | return node; | |
3949c4a7 | 874 | } |
310bc633 MJ |
875 | else |
876 | return NULL; | |
518dc859 RL |
877 | } |
878 | ||
310bc633 MJ |
879 | /* Set lattice LAT to bottom and return true if it previously was not set as |
880 | such. */ | |
881 | ||
c0cb5055 MJ |
882 | template <typename valtype> |
883 | inline bool | |
884 | ipcp_lattice<valtype>::set_to_bottom () | |
518dc859 | 885 | { |
c0cb5055 MJ |
886 | bool ret = !bottom; |
887 | bottom = true; | |
310bc633 MJ |
888 | return ret; |
889 | } | |
518dc859 | 890 | |
310bc633 MJ |
891 | /* Mark lattice as containing an unknown value and return true if it previously |
892 | was not marked as such. */ | |
129a37fc | 893 | |
c0cb5055 MJ |
894 | template <typename valtype> |
895 | inline bool | |
896 | ipcp_lattice<valtype>::set_contains_variable () | |
310bc633 | 897 | { |
c0cb5055 MJ |
898 | bool ret = !contains_variable; |
899 | contains_variable = true; | |
310bc633 | 900 | return ret; |
518dc859 RL |
901 | } |
902 | ||
f25ae20e | 903 | /* Set all aggregate lattices in PLATS to bottom and return true if they were |
2c9561b5 MJ |
904 | not previously set as such. */ |
905 | ||
906 | static inline bool | |
99b1c316 | 907 | set_agg_lats_to_bottom (class ipcp_param_lattices *plats) |
2c9561b5 MJ |
908 | { |
909 | bool ret = !plats->aggs_bottom; | |
910 | plats->aggs_bottom = true; | |
911 | return ret; | |
912 | } | |
913 | ||
f25ae20e | 914 | /* Mark all aggregate lattices in PLATS as containing an unknown value and |
2c9561b5 MJ |
915 | return true if they were not previously marked as such. */ |
916 | ||
917 | static inline bool | |
99b1c316 | 918 | set_agg_lats_contain_variable (class ipcp_param_lattices *plats) |
2c9561b5 MJ |
919 | { |
920 | bool ret = !plats->aggs_contain_variable; | |
921 | plats->aggs_contain_variable = true; | |
922 | return ret; | |
923 | } | |
924 | ||
8bc5448f KV |
925 | bool |
926 | ipcp_vr_lattice::meet_with (const ipcp_vr_lattice &other) | |
927 | { | |
928 | return meet_with_1 (&other.m_vr); | |
929 | } | |
930 | ||
f25ae20e | 931 | /* Meet the current value of the lattice with value range described by VR |
8bc5448f KV |
932 | lattice. */ |
933 | ||
934 | bool | |
028d81b1 | 935 | ipcp_vr_lattice::meet_with (const value_range *p_vr) |
8bc5448f KV |
936 | { |
937 | return meet_with_1 (p_vr); | |
938 | } | |
939 | ||
54994253 AH |
940 | /* Meet the current value of the lattice with value range described by |
941 | OTHER_VR lattice. Return TRUE if anything changed. */ | |
8bc5448f KV |
942 | |
943 | bool | |
028d81b1 | 944 | ipcp_vr_lattice::meet_with_1 (const value_range *other_vr) |
8bc5448f | 945 | { |
8bc5448f KV |
946 | if (bottom_p ()) |
947 | return false; | |
948 | ||
54994253 | 949 | if (other_vr->varying_p ()) |
8bc5448f KV |
950 | return set_to_bottom (); |
951 | ||
028d81b1 | 952 | value_range save (m_vr); |
54994253 | 953 | m_vr.union_ (other_vr); |
ff361cc6 | 954 | return !m_vr.equal_p (save); |
8bc5448f KV |
955 | } |
956 | ||
957 | /* Return true if value range information in the lattice is yet unknown. */ | |
958 | ||
959 | bool | |
960 | ipcp_vr_lattice::top_p () const | |
961 | { | |
54994253 | 962 | return m_vr.undefined_p (); |
8bc5448f KV |
963 | } |
964 | ||
965 | /* Return true if value range information in the lattice is known to be | |
966 | unusable. */ | |
967 | ||
968 | bool | |
969 | ipcp_vr_lattice::bottom_p () const | |
970 | { | |
54994253 | 971 | return m_vr.varying_p (); |
8bc5448f KV |
972 | } |
973 | ||
974 | /* Set value range information in the lattice to bottom. Return true if it | |
975 | previously was in a different state. */ | |
976 | ||
977 | bool | |
978 | ipcp_vr_lattice::set_to_bottom () | |
979 | { | |
54994253 | 980 | if (m_vr.varying_p ()) |
8bc5448f | 981 | return false; |
97ecc8d5 AH |
982 | /* ?? We create all sorts of VARYING ranges for floats, structures, |
983 | and other types which we cannot handle as ranges. We should | |
984 | probably avoid handling them throughout the pass, but it's easier | |
985 | to create a sensible VARYING here and let the lattice | |
986 | propagate. */ | |
987 | m_vr.set_varying (integer_type_node); | |
8bc5448f KV |
988 | return true; |
989 | } | |
990 | ||
209ca542 PK |
991 | /* Set lattice value to bottom, if it already isn't the case. */ |
992 | ||
993 | bool | |
994 | ipcp_bits_lattice::set_to_bottom () | |
995 | { | |
996 | if (bottom_p ()) | |
997 | return false; | |
998 | m_lattice_val = IPA_BITS_VARYING; | |
999 | m_value = 0; | |
1000 | m_mask = -1; | |
1001 | return true; | |
1002 | } | |
1003 | ||
1004 | /* Set to constant if it isn't already. Only meant to be called | |
1005 | when switching state from TOP. */ | |
1006 | ||
1007 | bool | |
1008 | ipcp_bits_lattice::set_to_constant (widest_int value, widest_int mask) | |
1009 | { | |
1010 | gcc_assert (top_p ()); | |
1011 | m_lattice_val = IPA_BITS_CONSTANT; | |
1012 | m_value = value; | |
1013 | m_mask = mask; | |
1014 | return true; | |
1015 | } | |
1016 | ||
1017 | /* Convert operand to value, mask form. */ | |
1018 | ||
1019 | void | |
1020 | ipcp_bits_lattice::get_value_and_mask (tree operand, widest_int *valuep, widest_int *maskp) | |
1021 | { | |
1022 | wide_int get_nonzero_bits (const_tree); | |
1023 | ||
1024 | if (TREE_CODE (operand) == INTEGER_CST) | |
1025 | { | |
155c9907 | 1026 | *valuep = wi::to_widest (operand); |
209ca542 PK |
1027 | *maskp = 0; |
1028 | } | |
1029 | else | |
1030 | { | |
1031 | *valuep = 0; | |
1032 | *maskp = -1; | |
1033 | } | |
1034 | } | |
1035 | ||
1036 | /* Meet operation, similar to ccp_lattice_meet, we xor values | |
1037 | if this->value, value have different values at same bit positions, we want | |
1038 | to drop that bit to varying. Return true if mask is changed. | |
1039 | This function assumes that the lattice value is in CONSTANT state */ | |
1040 | ||
1041 | bool | |
1042 | ipcp_bits_lattice::meet_with_1 (widest_int value, widest_int mask, | |
1043 | unsigned precision) | |
1044 | { | |
1045 | gcc_assert (constant_p ()); | |
155c9907 JJ |
1046 | |
1047 | widest_int old_mask = m_mask; | |
209ca542 PK |
1048 | m_mask = (m_mask | mask) | (m_value ^ value); |
1049 | ||
1050 | if (wi::sext (m_mask, precision) == -1) | |
1051 | return set_to_bottom (); | |
1052 | ||
1053 | return m_mask != old_mask; | |
1054 | } | |
1055 | ||
1056 | /* Meet the bits lattice with operand | |
1057 | described by <value, mask, sgn, precision. */ | |
1058 | ||
1059 | bool | |
1060 | ipcp_bits_lattice::meet_with (widest_int value, widest_int mask, | |
1061 | unsigned precision) | |
1062 | { | |
1063 | if (bottom_p ()) | |
1064 | return false; | |
1065 | ||
1066 | if (top_p ()) | |
1067 | { | |
1068 | if (wi::sext (mask, precision) == -1) | |
1069 | return set_to_bottom (); | |
155c9907 | 1070 | return set_to_constant (value, mask); |
209ca542 PK |
1071 | } |
1072 | ||
1073 | return meet_with_1 (value, mask, precision); | |
1074 | } | |
1075 | ||
1076 | /* Meet bits lattice with the result of bit_value_binop (other, operand) | |
1077 | if code is binary operation or bit_value_unop (other) if code is unary op. | |
1078 | In the case when code is nop_expr, no adjustment is required. */ | |
1079 | ||
1080 | bool | |
1081 | ipcp_bits_lattice::meet_with (ipcp_bits_lattice& other, unsigned precision, | |
1082 | signop sgn, enum tree_code code, tree operand) | |
1083 | { | |
1084 | if (other.bottom_p ()) | |
1085 | return set_to_bottom (); | |
1086 | ||
1087 | if (bottom_p () || other.top_p ()) | |
1088 | return false; | |
1089 | ||
1090 | widest_int adjusted_value, adjusted_mask; | |
1091 | ||
1092 | if (TREE_CODE_CLASS (code) == tcc_binary) | |
1093 | { | |
1094 | tree type = TREE_TYPE (operand); | |
209ca542 PK |
1095 | widest_int o_value, o_mask; |
1096 | get_value_and_mask (operand, &o_value, &o_mask); | |
1097 | ||
1098 | bit_value_binop (code, sgn, precision, &adjusted_value, &adjusted_mask, | |
1099 | sgn, precision, other.get_value (), other.get_mask (), | |
1100 | TYPE_SIGN (type), TYPE_PRECISION (type), o_value, o_mask); | |
1101 | ||
1102 | if (wi::sext (adjusted_mask, precision) == -1) | |
1103 | return set_to_bottom (); | |
1104 | } | |
1105 | ||
1106 | else if (TREE_CODE_CLASS (code) == tcc_unary) | |
1107 | { | |
1108 | bit_value_unop (code, sgn, precision, &adjusted_value, | |
1109 | &adjusted_mask, sgn, precision, other.get_value (), | |
1110 | other.get_mask ()); | |
1111 | ||
1112 | if (wi::sext (adjusted_mask, precision) == -1) | |
1113 | return set_to_bottom (); | |
1114 | } | |
1115 | ||
209ca542 PK |
1116 | else |
1117 | return set_to_bottom (); | |
1118 | ||
1119 | if (top_p ()) | |
1120 | { | |
1121 | if (wi::sext (adjusted_mask, precision) == -1) | |
1122 | return set_to_bottom (); | |
155c9907 | 1123 | return set_to_constant (adjusted_value, adjusted_mask); |
209ca542 PK |
1124 | } |
1125 | else | |
1126 | return meet_with_1 (adjusted_value, adjusted_mask, precision); | |
1127 | } | |
1128 | ||
2c9561b5 MJ |
1129 | /* Mark bot aggregate and scalar lattices as containing an unknown variable, |
1130 | return true is any of them has not been marked as such so far. */ | |
1131 | ||
1132 | static inline bool | |
99b1c316 | 1133 | set_all_contains_variable (class ipcp_param_lattices *plats) |
2c9561b5 | 1134 | { |
44210a96 MJ |
1135 | bool ret; |
1136 | ret = plats->itself.set_contains_variable (); | |
1137 | ret |= plats->ctxlat.set_contains_variable (); | |
1138 | ret |= set_agg_lats_contain_variable (plats); | |
209ca542 | 1139 | ret |= plats->bits_lattice.set_to_bottom (); |
8bc5448f | 1140 | ret |= plats->m_value_range.set_to_bottom (); |
2c9561b5 MJ |
1141 | return ret; |
1142 | } | |
1143 | ||
af21714c MJ |
1144 | /* Worker of call_for_symbol_thunks_and_aliases, increment the integer DATA |
1145 | points to by the number of callers to NODE. */ | |
1146 | ||
1147 | static bool | |
1148 | count_callers (cgraph_node *node, void *data) | |
1149 | { | |
1150 | int *caller_count = (int *) data; | |
1151 | ||
1152 | for (cgraph_edge *cs = node->callers; cs; cs = cs->next_caller) | |
67914693 | 1153 | /* Local thunks can be handled transparently, but if the thunk cannot |
af21714c | 1154 | be optimized out, count it as a real use. */ |
87f94429 | 1155 | if (!cs->caller->thunk.thunk_p || !cs->caller->local) |
af21714c MJ |
1156 | ++*caller_count; |
1157 | return false; | |
1158 | } | |
1159 | ||
1160 | /* Worker of call_for_symbol_thunks_and_aliases, it is supposed to be called on | |
1161 | the one caller of some other node. Set the caller's corresponding flag. */ | |
1162 | ||
1163 | static bool | |
1164 | set_single_call_flag (cgraph_node *node, void *) | |
1165 | { | |
1166 | cgraph_edge *cs = node->callers; | |
1167 | /* Local thunks can be handled transparently, skip them. */ | |
87f94429 | 1168 | while (cs && cs->caller->thunk.thunk_p && cs->caller->local) |
af21714c | 1169 | cs = cs->next_caller; |
68188fff | 1170 | if (cs && IPA_NODE_REF (cs->caller)) |
af21714c | 1171 | { |
af21714c MJ |
1172 | IPA_NODE_REF (cs->caller)->node_calling_single_call = true; |
1173 | return true; | |
1174 | } | |
1175 | return false; | |
1176 | } | |
1177 | ||
310bc633 | 1178 | /* Initialize ipcp_lattices. */ |
43558bcc | 1179 | |
518dc859 | 1180 | static void |
310bc633 | 1181 | initialize_node_lattices (struct cgraph_node *node) |
518dc859 | 1182 | { |
99b1c316 | 1183 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 MJ |
1184 | struct cgraph_edge *ie; |
1185 | bool disable = false, variable = false; | |
1186 | int i; | |
518dc859 | 1187 | |
d52f5295 | 1188 | gcc_checking_assert (node->has_gimple_body_p ()); |
ff6686d2 MJ |
1189 | |
1190 | if (!ipa_get_param_count (info)) | |
1191 | disable = true; | |
87f94429 | 1192 | else if (node->local) |
af21714c MJ |
1193 | { |
1194 | int caller_count = 0; | |
1195 | node->call_for_symbol_thunks_and_aliases (count_callers, &caller_count, | |
1196 | true); | |
1197 | gcc_checking_assert (caller_count > 0); | |
1198 | if (caller_count == 1) | |
1199 | node->call_for_symbol_thunks_and_aliases (set_single_call_flag, | |
1200 | NULL, true); | |
1201 | } | |
1202 | else | |
310bc633 MJ |
1203 | { |
1204 | /* When cloning is allowed, we can assume that externally visible | |
1205 | functions are not called. We will compensate this by cloning | |
1206 | later. */ | |
1207 | if (ipcp_versionable_function_p (node) | |
1208 | && ipcp_cloning_candidate_p (node)) | |
1209 | variable = true; | |
1210 | else | |
1211 | disable = true; | |
1212 | } | |
518dc859 | 1213 | |
ff6686d2 MJ |
1214 | if (dump_file && (dump_flags & TDF_DETAILS) |
1215 | && !node->alias && !node->thunk.thunk_p) | |
8bc5448f | 1216 | { |
ff6686d2 MJ |
1217 | fprintf (dump_file, "Initializing lattices of %s\n", |
1218 | node->dump_name ()); | |
1219 | if (disable || variable) | |
1220 | fprintf (dump_file, " Marking all lattices as %s\n", | |
1221 | disable ? "BOTTOM" : "VARIABLE"); | |
8bc5448f KV |
1222 | } |
1223 | ||
ff6686d2 MJ |
1224 | auto_vec<bool, 16> surviving_params; |
1225 | bool pre_modified = false; | |
1226 | if (!disable && node->clone.param_adjustments) | |
310bc633 | 1227 | { |
ff6686d2 MJ |
1228 | /* At the moment all IPA optimizations should use the number of |
1229 | parameters of the prevailing decl as the m_always_copy_start. | |
1230 | Handling any other value would complicate the code below, so for the | |
1231 | time bing let's only assert it is so. */ | |
1232 | gcc_assert ((node->clone.param_adjustments->m_always_copy_start | |
1233 | == ipa_get_param_count (info)) | |
1234 | || node->clone.param_adjustments->m_always_copy_start < 0); | |
1235 | ||
1236 | pre_modified = true; | |
1237 | node->clone.param_adjustments->get_surviving_params (&surviving_params); | |
1238 | ||
1239 | if (dump_file && (dump_flags & TDF_DETAILS) | |
1240 | && !node->alias && !node->thunk.thunk_p) | |
310bc633 | 1241 | { |
ff6686d2 MJ |
1242 | bool first = true; |
1243 | for (int j = 0; j < ipa_get_param_count (info); j++) | |
2c9561b5 | 1244 | { |
ff6686d2 MJ |
1245 | if (j < (int) surviving_params.length () |
1246 | && surviving_params[j]) | |
1247 | continue; | |
1248 | if (first) | |
1249 | { | |
1250 | fprintf (dump_file, | |
1251 | " The following parameters are dead on arrival:"); | |
1252 | first = false; | |
1253 | } | |
1254 | fprintf (dump_file, " %u", j); | |
2c9561b5 | 1255 | } |
ff6686d2 MJ |
1256 | if (!first) |
1257 | fprintf (dump_file, "\n"); | |
1258 | } | |
1259 | } | |
1260 | ||
1261 | for (i = 0; i < ipa_get_param_count (info); i++) | |
1262 | { | |
1263 | ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); | |
1264 | if (disable | |
1265 | || (pre_modified && (surviving_params.length () <= (unsigned) i | |
1266 | || !surviving_params[i]))) | |
1267 | { | |
1268 | plats->itself.set_to_bottom (); | |
1269 | plats->ctxlat.set_to_bottom (); | |
1270 | set_agg_lats_to_bottom (plats); | |
1271 | plats->bits_lattice.set_to_bottom (); | |
1272 | plats->m_value_range.set_to_bottom (); | |
1273 | } | |
1274 | else | |
1275 | { | |
1276 | plats->m_value_range.init (); | |
1277 | if (variable) | |
2c9561b5 | 1278 | set_all_contains_variable (plats); |
310bc633 | 1279 | } |
310bc633 | 1280 | } |
518dc859 | 1281 | |
310bc633 | 1282 | for (ie = node->indirect_calls; ie; ie = ie->next_callee) |
1d5755ef | 1283 | if (ie->indirect_info->polymorphic |
155c9907 | 1284 | && ie->indirect_info->param_index >= 0) |
0818c24c | 1285 | { |
310bc633 | 1286 | gcc_checking_assert (ie->indirect_info->param_index >= 0); |
2c9561b5 MJ |
1287 | ipa_get_parm_lattices (info, |
1288 | ie->indirect_info->param_index)->virt_call = 1; | |
0818c24c | 1289 | } |
518dc859 RL |
1290 | } |
1291 | ||
eb270950 FX |
1292 | /* Return the result of a (possibly arithmetic) operation on the constant |
1293 | value INPUT. OPERAND is 2nd operand for binary operation. RES_TYPE is | |
1294 | the type of the parameter to which the result is passed. Return | |
1295 | NULL_TREE if that cannot be determined or be considered an | |
1296 | interprocedural invariant. */ | |
3949c4a7 | 1297 | |
310bc633 | 1298 | static tree |
eb270950 FX |
1299 | ipa_get_jf_arith_result (enum tree_code opcode, tree input, tree operand, |
1300 | tree res_type) | |
3949c4a7 | 1301 | { |
e5cf5e11 | 1302 | tree res; |
3949c4a7 | 1303 | |
eb270950 | 1304 | if (opcode == NOP_EXPR) |
310bc633 | 1305 | return input; |
04643334 MJ |
1306 | if (!is_gimple_ip_invariant (input)) |
1307 | return NULL_TREE; | |
3949c4a7 | 1308 | |
e5cf5e11 | 1309 | if (!res_type) |
a2b4c188 | 1310 | { |
e5cf5e11 PK |
1311 | if (TREE_CODE_CLASS (opcode) == tcc_comparison) |
1312 | res_type = boolean_type_node; | |
1313 | else if (expr_type_first_operand_type_p (opcode)) | |
1314 | res_type = TREE_TYPE (input); | |
a2b4c188 | 1315 | else |
e5cf5e11 | 1316 | return NULL_TREE; |
a2b4c188 | 1317 | } |
e5cf5e11 PK |
1318 | |
1319 | if (TREE_CODE_CLASS (opcode) == tcc_unary) | |
1320 | res = fold_unary (opcode, res_type, input); | |
1321 | else | |
eb270950 | 1322 | res = fold_binary (opcode, res_type, input, operand); |
e5cf5e11 | 1323 | |
310bc633 MJ |
1324 | if (res && !is_gimple_ip_invariant (res)) |
1325 | return NULL_TREE; | |
3949c4a7 | 1326 | |
310bc633 | 1327 | return res; |
3949c4a7 MJ |
1328 | } |
1329 | ||
eb270950 FX |
1330 | /* Return the result of a (possibly arithmetic) pass through jump function |
1331 | JFUNC on the constant value INPUT. RES_TYPE is the type of the parameter | |
1332 | to which the result is passed. Return NULL_TREE if that cannot be | |
1333 | determined or be considered an interprocedural invariant. */ | |
1334 | ||
1335 | static tree | |
1336 | ipa_get_jf_pass_through_result (struct ipa_jump_func *jfunc, tree input, | |
1337 | tree res_type) | |
1338 | { | |
1339 | return ipa_get_jf_arith_result (ipa_get_jf_pass_through_operation (jfunc), | |
1340 | input, | |
1341 | ipa_get_jf_pass_through_operand (jfunc), | |
1342 | res_type); | |
1343 | } | |
1344 | ||
310bc633 MJ |
1345 | /* Return the result of an ancestor jump function JFUNC on the constant value |
1346 | INPUT. Return NULL_TREE if that cannot be determined. */ | |
3949c4a7 | 1347 | |
310bc633 MJ |
1348 | static tree |
1349 | ipa_get_jf_ancestor_result (struct ipa_jump_func *jfunc, tree input) | |
3949c4a7 | 1350 | { |
44210a96 MJ |
1351 | gcc_checking_assert (TREE_CODE (input) != TREE_BINFO); |
1352 | if (TREE_CODE (input) == ADDR_EXPR) | |
3949c4a7 | 1353 | { |
310bc633 MJ |
1354 | tree t = TREE_OPERAND (input, 0); |
1355 | t = build_ref_for_offset (EXPR_LOCATION (t), t, | |
ee45a32d | 1356 | ipa_get_jf_ancestor_offset (jfunc), false, |
3b97a5c7 | 1357 | ptr_type_node, NULL, false); |
310bc633 | 1358 | return build_fold_addr_expr (t); |
3949c4a7 MJ |
1359 | } |
1360 | else | |
310bc633 MJ |
1361 | return NULL_TREE; |
1362 | } | |
3949c4a7 | 1363 | |
44210a96 MJ |
1364 | /* Determine whether JFUNC evaluates to a single known constant value and if |
1365 | so, return it. Otherwise return NULL. INFO describes the caller node or | |
1366 | the one it is inlined to, so that pass-through jump functions can be | |
e5cf5e11 PK |
1367 | evaluated. PARM_TYPE is the type of the parameter to which the result is |
1368 | passed. */ | |
310bc633 | 1369 | |
d2d668fb | 1370 | tree |
99b1c316 | 1371 | ipa_value_from_jfunc (class ipa_node_params *info, struct ipa_jump_func *jfunc, |
e5cf5e11 | 1372 | tree parm_type) |
310bc633 MJ |
1373 | { |
1374 | if (jfunc->type == IPA_JF_CONST) | |
7b872d9e | 1375 | return ipa_get_jf_constant (jfunc); |
310bc633 MJ |
1376 | else if (jfunc->type == IPA_JF_PASS_THROUGH |
1377 | || jfunc->type == IPA_JF_ANCESTOR) | |
3949c4a7 | 1378 | { |
310bc633 MJ |
1379 | tree input; |
1380 | int idx; | |
3949c4a7 | 1381 | |
310bc633 | 1382 | if (jfunc->type == IPA_JF_PASS_THROUGH) |
7b872d9e | 1383 | idx = ipa_get_jf_pass_through_formal_id (jfunc); |
310bc633 | 1384 | else |
7b872d9e | 1385 | idx = ipa_get_jf_ancestor_formal_id (jfunc); |
3949c4a7 | 1386 | |
310bc633 | 1387 | if (info->ipcp_orig_node) |
44210a96 | 1388 | input = info->known_csts[idx]; |
310bc633 | 1389 | else |
3949c4a7 | 1390 | { |
c0cb5055 | 1391 | ipcp_lattice<tree> *lat; |
310bc633 | 1392 | |
370a7814 JH |
1393 | if (!info->lattices |
1394 | || idx >= ipa_get_param_count (info)) | |
2bf86c84 | 1395 | return NULL_TREE; |
2c9561b5 | 1396 | lat = ipa_get_scalar_lat (info, idx); |
c0cb5055 | 1397 | if (!lat->is_single_const ()) |
310bc633 MJ |
1398 | return NULL_TREE; |
1399 | input = lat->values->value; | |
1400 | } | |
1401 | ||
1402 | if (!input) | |
1403 | return NULL_TREE; | |
1404 | ||
1405 | if (jfunc->type == IPA_JF_PASS_THROUGH) | |
e5cf5e11 | 1406 | return ipa_get_jf_pass_through_result (jfunc, input, parm_type); |
310bc633 | 1407 | else |
7b872d9e | 1408 | return ipa_get_jf_ancestor_result (jfunc, input); |
3949c4a7 | 1409 | } |
310bc633 MJ |
1410 | else |
1411 | return NULL_TREE; | |
3949c4a7 MJ |
1412 | } |
1413 | ||
f25ae20e | 1414 | /* Determine whether JFUNC evaluates to single known polymorphic context, given |
44210a96 MJ |
1415 | that INFO describes the caller node or the one it is inlined to, CS is the |
1416 | call graph edge corresponding to JFUNC and CSIDX index of the described | |
1417 | parameter. */ | |
1418 | ||
1419 | ipa_polymorphic_call_context | |
1420 | ipa_context_from_jfunc (ipa_node_params *info, cgraph_edge *cs, int csidx, | |
1421 | ipa_jump_func *jfunc) | |
1422 | { | |
1423 | ipa_edge_args *args = IPA_EDGE_REF (cs); | |
1424 | ipa_polymorphic_call_context ctx; | |
1425 | ipa_polymorphic_call_context *edge_ctx | |
1426 | = cs ? ipa_get_ith_polymorhic_call_context (args, csidx) : NULL; | |
1427 | ||
1428 | if (edge_ctx && !edge_ctx->useless_p ()) | |
1429 | ctx = *edge_ctx; | |
1430 | ||
1431 | if (jfunc->type == IPA_JF_PASS_THROUGH | |
1432 | || jfunc->type == IPA_JF_ANCESTOR) | |
1433 | { | |
1434 | ipa_polymorphic_call_context srcctx; | |
1435 | int srcidx; | |
df0d8136 | 1436 | bool type_preserved = true; |
44210a96 MJ |
1437 | if (jfunc->type == IPA_JF_PASS_THROUGH) |
1438 | { | |
df0d8136 | 1439 | if (ipa_get_jf_pass_through_operation (jfunc) != NOP_EXPR) |
44210a96 | 1440 | return ctx; |
df0d8136 | 1441 | type_preserved = ipa_get_jf_pass_through_type_preserved (jfunc); |
44210a96 MJ |
1442 | srcidx = ipa_get_jf_pass_through_formal_id (jfunc); |
1443 | } | |
1444 | else | |
1445 | { | |
df0d8136 | 1446 | type_preserved = ipa_get_jf_ancestor_type_preserved (jfunc); |
44210a96 MJ |
1447 | srcidx = ipa_get_jf_ancestor_formal_id (jfunc); |
1448 | } | |
1449 | if (info->ipcp_orig_node) | |
1450 | { | |
1451 | if (info->known_contexts.exists ()) | |
1452 | srcctx = info->known_contexts[srcidx]; | |
1453 | } | |
1454 | else | |
1455 | { | |
370a7814 JH |
1456 | if (!info->lattices |
1457 | || srcidx >= ipa_get_param_count (info)) | |
2bf86c84 | 1458 | return ctx; |
44210a96 MJ |
1459 | ipcp_lattice<ipa_polymorphic_call_context> *lat; |
1460 | lat = ipa_get_poly_ctx_lat (info, srcidx); | |
1461 | if (!lat->is_single_const ()) | |
1462 | return ctx; | |
1463 | srcctx = lat->values->value; | |
1464 | } | |
1465 | if (srcctx.useless_p ()) | |
1466 | return ctx; | |
1467 | if (jfunc->type == IPA_JF_ANCESTOR) | |
1468 | srcctx.offset_by (ipa_get_jf_ancestor_offset (jfunc)); | |
df0d8136 JH |
1469 | if (!type_preserved) |
1470 | srcctx.possible_dynamic_type_change (cs->in_polymorphic_cdtor); | |
1471 | srcctx.combine_with (ctx); | |
1472 | return srcctx; | |
44210a96 MJ |
1473 | } |
1474 | ||
1475 | return ctx; | |
1476 | } | |
3949c4a7 | 1477 | |
68718e8e JH |
1478 | /* Emulate effects of unary OPERATION and/or conversion from SRC_TYPE to |
1479 | DST_TYPE on value range in SRC_VR and store it to DST_VR. Return true if | |
1480 | the result is a range or an anti-range. */ | |
1481 | ||
1482 | static bool | |
1483 | ipa_vr_operation_and_type_effects (value_range *dst_vr, | |
1484 | value_range *src_vr, | |
1485 | enum tree_code operation, | |
1486 | tree dst_type, tree src_type) | |
1487 | { | |
1488 | range_fold_unary_expr (dst_vr, operation, dst_type, src_vr, src_type); | |
1489 | if (dst_vr->varying_p () || dst_vr->undefined_p ()) | |
1490 | return false; | |
1491 | return true; | |
1492 | } | |
1493 | ||
1494 | /* Determine value_range of JFUNC given that INFO describes the caller node or | |
1495 | the one it is inlined to, CS is the call graph edge corresponding to JFUNC | |
1496 | and PARM_TYPE of the parameter. */ | |
1497 | ||
1498 | value_range | |
1499 | ipa_value_range_from_jfunc (ipa_node_params *info, cgraph_edge *cs, | |
1500 | ipa_jump_func *jfunc, tree parm_type) | |
1501 | { | |
1502 | value_range vr; | |
1503 | return vr; | |
1504 | if (jfunc->m_vr) | |
1505 | ipa_vr_operation_and_type_effects (&vr, | |
1506 | jfunc->m_vr, | |
1507 | NOP_EXPR, parm_type, | |
1508 | jfunc->m_vr->type ()); | |
1509 | if (vr.singleton_p ()) | |
1510 | return vr; | |
1511 | if (jfunc->type == IPA_JF_PASS_THROUGH) | |
1512 | { | |
1513 | int idx; | |
1514 | ipcp_transformation *sum | |
1515 | = ipcp_get_transformation_summary (cs->caller->inlined_to | |
1516 | ? cs->caller->inlined_to | |
1517 | : cs->caller); | |
1518 | if (!sum || !sum->m_vr) | |
1519 | return vr; | |
1520 | ||
1521 | idx = ipa_get_jf_pass_through_formal_id (jfunc); | |
1522 | ||
1523 | if (!(*sum->m_vr)[idx].known) | |
1524 | return vr; | |
1525 | tree vr_type = ipa_get_type (info, idx); | |
1526 | value_range srcvr (wide_int_to_tree (vr_type, (*sum->m_vr)[idx].min), | |
1527 | wide_int_to_tree (vr_type, (*sum->m_vr)[idx].max), | |
1528 | (*sum->m_vr)[idx].type); | |
1529 | ||
1530 | enum tree_code operation = ipa_get_jf_pass_through_operation (jfunc); | |
1531 | ||
1532 | if (TREE_CODE_CLASS (operation) == tcc_unary) | |
1533 | { | |
1534 | value_range res; | |
1535 | ||
1536 | if (ipa_vr_operation_and_type_effects (&res, | |
1537 | &srcvr, | |
1538 | operation, parm_type, | |
1539 | vr_type)) | |
1540 | vr.intersect (res); | |
1541 | } | |
1542 | else | |
1543 | { | |
1544 | value_range op_res, res; | |
1545 | tree op = ipa_get_jf_pass_through_operand (jfunc); | |
1546 | value_range op_vr (op, op); | |
1547 | ||
1548 | range_fold_binary_expr (&op_res, operation, vr_type, &srcvr, &op_vr); | |
1549 | if (ipa_vr_operation_and_type_effects (&res, | |
1550 | &op_res, | |
1551 | NOP_EXPR, parm_type, | |
1552 | vr_type)) | |
1553 | vr.intersect (res); | |
1554 | } | |
1555 | } | |
1556 | return vr; | |
1557 | } | |
1558 | ||
eb270950 FX |
1559 | /* See if NODE is a clone with a known aggregate value at a given OFFSET of a |
1560 | parameter with the given INDEX. */ | |
1561 | ||
1562 | static tree | |
1563 | get_clone_agg_value (struct cgraph_node *node, HOST_WIDE_INT offset, | |
1564 | int index) | |
1565 | { | |
1566 | struct ipa_agg_replacement_value *aggval; | |
1567 | ||
1568 | aggval = ipa_get_agg_replacements_for_node (node); | |
1569 | while (aggval) | |
1570 | { | |
1571 | if (aggval->offset == offset | |
1572 | && aggval->index == index) | |
1573 | return aggval->value; | |
1574 | aggval = aggval->next; | |
1575 | } | |
1576 | return NULL_TREE; | |
1577 | } | |
1578 | ||
1579 | /* Determine whether ITEM, jump function for an aggregate part, evaluates to a | |
1580 | single known constant value and if so, return it. Otherwise return NULL. | |
1581 | NODE and INFO describes the caller node or the one it is inlined to, and | |
1582 | its related info. */ | |
1583 | ||
1584 | static tree | |
1585 | ipa_agg_value_from_node (class ipa_node_params *info, | |
1586 | struct cgraph_node *node, | |
1587 | struct ipa_agg_jf_item *item) | |
1588 | { | |
1589 | tree value = NULL_TREE; | |
1590 | int src_idx; | |
1591 | ||
1592 | if (item->offset < 0 || item->jftype == IPA_JF_UNKNOWN) | |
1593 | return NULL_TREE; | |
1594 | ||
1595 | if (item->jftype == IPA_JF_CONST) | |
1596 | return item->value.constant; | |
1597 | ||
1598 | gcc_checking_assert (item->jftype == IPA_JF_PASS_THROUGH | |
1599 | || item->jftype == IPA_JF_LOAD_AGG); | |
1600 | ||
1601 | src_idx = item->value.pass_through.formal_id; | |
1602 | ||
1603 | if (info->ipcp_orig_node) | |
1604 | { | |
1605 | if (item->jftype == IPA_JF_PASS_THROUGH) | |
1606 | value = info->known_csts[src_idx]; | |
1607 | else | |
1608 | value = get_clone_agg_value (node, item->value.load_agg.offset, | |
1609 | src_idx); | |
1610 | } | |
1611 | else if (info->lattices) | |
1612 | { | |
1613 | class ipcp_param_lattices *src_plats | |
1614 | = ipa_get_parm_lattices (info, src_idx); | |
1615 | ||
1616 | if (item->jftype == IPA_JF_PASS_THROUGH) | |
1617 | { | |
1618 | struct ipcp_lattice<tree> *lat = &src_plats->itself; | |
1619 | ||
1620 | if (!lat->is_single_const ()) | |
1621 | return NULL_TREE; | |
1622 | ||
1623 | value = lat->values->value; | |
1624 | } | |
1625 | else if (src_plats->aggs | |
1626 | && !src_plats->aggs_bottom | |
1627 | && !src_plats->aggs_contain_variable | |
1628 | && src_plats->aggs_by_ref == item->value.load_agg.by_ref) | |
1629 | { | |
1630 | struct ipcp_agg_lattice *aglat; | |
1631 | ||
1632 | for (aglat = src_plats->aggs; aglat; aglat = aglat->next) | |
1633 | { | |
1634 | if (aglat->offset > item->value.load_agg.offset) | |
1635 | break; | |
1636 | ||
1637 | if (aglat->offset == item->value.load_agg.offset) | |
1638 | { | |
1639 | if (aglat->is_single_const ()) | |
1640 | value = aglat->values->value; | |
1641 | break; | |
1642 | } | |
1643 | } | |
1644 | } | |
1645 | } | |
1646 | ||
1647 | if (!value) | |
1648 | return NULL_TREE; | |
1649 | ||
1650 | if (item->jftype == IPA_JF_LOAD_AGG) | |
1651 | { | |
1652 | tree load_type = item->value.load_agg.type; | |
1653 | tree value_type = TREE_TYPE (value); | |
1654 | ||
1655 | /* Ensure value type is compatible with load type. */ | |
1656 | if (!useless_type_conversion_p (load_type, value_type)) | |
1657 | return NULL_TREE; | |
1658 | } | |
1659 | ||
1660 | return ipa_get_jf_arith_result (item->value.pass_through.operation, | |
1661 | value, | |
1662 | item->value.pass_through.operand, | |
1663 | item->type); | |
1664 | } | |
1665 | ||
1666 | /* Determine whether AGG_JFUNC evaluates to a set of known constant value for | |
1667 | an aggregate and if so, return it. Otherwise return an empty set. NODE | |
1668 | and INFO describes the caller node or the one it is inlined to, and its | |
1669 | related info. */ | |
1670 | ||
1671 | struct ipa_agg_value_set | |
1672 | ipa_agg_value_set_from_jfunc (class ipa_node_params *info, cgraph_node *node, | |
1673 | struct ipa_agg_jump_function *agg_jfunc) | |
1674 | { | |
1675 | struct ipa_agg_value_set agg; | |
1676 | struct ipa_agg_jf_item *item; | |
1677 | int i; | |
1678 | ||
1679 | agg.items = vNULL; | |
1680 | agg.by_ref = agg_jfunc->by_ref; | |
1681 | ||
1682 | FOR_EACH_VEC_SAFE_ELT (agg_jfunc->items, i, item) | |
1683 | { | |
1684 | tree value = ipa_agg_value_from_node (info, node, item); | |
1685 | ||
1686 | if (value) | |
1687 | { | |
1688 | struct ipa_agg_value value_item; | |
1689 | ||
1690 | value_item.offset = item->offset; | |
1691 | value_item.value = value; | |
1692 | ||
1693 | agg.items.safe_push (value_item); | |
1694 | } | |
1695 | } | |
1696 | return agg; | |
1697 | } | |
1698 | ||
310bc633 MJ |
1699 | /* If checking is enabled, verify that no lattice is in the TOP state, i.e. not |
1700 | bottom, not containing a variable component and without any known value at | |
1701 | the same time. */ | |
3949c4a7 | 1702 | |
310bc633 MJ |
1703 | DEBUG_FUNCTION void |
1704 | ipcp_verify_propagated_values (void) | |
518dc859 | 1705 | { |
310bc633 | 1706 | struct cgraph_node *node; |
ca30a539 | 1707 | |
310bc633 | 1708 | FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node) |
518dc859 | 1709 | { |
99b1c316 | 1710 | class ipa_node_params *info = IPA_NODE_REF (node); |
e72763e2 JH |
1711 | if (!opt_for_fn (node->decl, flag_ipa_cp) |
1712 | || !opt_for_fn (node->decl, optimize)) | |
6cf67b62 | 1713 | continue; |
310bc633 | 1714 | int i, count = ipa_get_param_count (info); |
c43f07af | 1715 | |
310bc633 | 1716 | for (i = 0; i < count; i++) |
518dc859 | 1717 | { |
c0cb5055 | 1718 | ipcp_lattice<tree> *lat = ipa_get_scalar_lat (info, i); |
c43f07af | 1719 | |
310bc633 MJ |
1720 | if (!lat->bottom |
1721 | && !lat->contains_variable | |
1722 | && lat->values_count == 0) | |
518dc859 | 1723 | { |
310bc633 | 1724 | if (dump_file) |
518dc859 | 1725 | { |
6c52831d | 1726 | symtab->dump (dump_file); |
310bc633 | 1727 | fprintf (dump_file, "\nIPA lattices after constant " |
5bed50e8 | 1728 | "propagation, before gcc_unreachable:\n"); |
310bc633 | 1729 | print_all_lattices (dump_file, true, false); |
518dc859 | 1730 | } |
3949c4a7 | 1731 | |
310bc633 | 1732 | gcc_unreachable (); |
518dc859 RL |
1733 | } |
1734 | } | |
1735 | } | |
1736 | } | |
1737 | ||
310bc633 MJ |
1738 | /* Return true iff X and Y should be considered equal values by IPA-CP. */ |
1739 | ||
1740 | static bool | |
1741 | values_equal_for_ipcp_p (tree x, tree y) | |
1742 | { | |
1743 | gcc_checking_assert (x != NULL_TREE && y != NULL_TREE); | |
1744 | ||
1745 | if (x == y) | |
1746 | return true; | |
1747 | ||
310bc633 MJ |
1748 | if (TREE_CODE (x) == ADDR_EXPR |
1749 | && TREE_CODE (y) == ADDR_EXPR | |
1750 | && TREE_CODE (TREE_OPERAND (x, 0)) == CONST_DECL | |
1751 | && TREE_CODE (TREE_OPERAND (y, 0)) == CONST_DECL) | |
1752 | return operand_equal_p (DECL_INITIAL (TREE_OPERAND (x, 0)), | |
1753 | DECL_INITIAL (TREE_OPERAND (y, 0)), 0); | |
1754 | else | |
1755 | return operand_equal_p (x, y, 0); | |
1756 | } | |
1757 | ||
44210a96 MJ |
1758 | /* Return true iff X and Y should be considered equal contexts by IPA-CP. */ |
1759 | ||
1760 | static bool | |
1761 | values_equal_for_ipcp_p (ipa_polymorphic_call_context x, | |
1762 | ipa_polymorphic_call_context y) | |
1763 | { | |
1764 | return x.equal_to (y); | |
1765 | } | |
1766 | ||
1767 | ||
c0cb5055 MJ |
1768 | /* Add a new value source to the value represented by THIS, marking that a |
1769 | value comes from edge CS and (if the underlying jump function is a | |
1770 | pass-through or an ancestor one) from a caller value SRC_VAL of a caller | |
1771 | parameter described by SRC_INDEX. OFFSET is negative if the source was the | |
1772 | scalar value of the parameter itself or the offset within an aggregate. */ | |
310bc633 | 1773 | |
c0cb5055 MJ |
1774 | template <typename valtype> |
1775 | void | |
1776 | ipcp_value<valtype>::add_source (cgraph_edge *cs, ipcp_value *src_val, | |
1777 | int src_idx, HOST_WIDE_INT offset) | |
518dc859 | 1778 | { |
c0cb5055 | 1779 | ipcp_value_source<valtype> *src; |
ca30a539 | 1780 | |
2651e637 | 1781 | src = new (ipcp_sources_pool.allocate ()) ipcp_value_source<valtype>; |
2c9561b5 | 1782 | src->offset = offset; |
310bc633 MJ |
1783 | src->cs = cs; |
1784 | src->val = src_val; | |
1785 | src->index = src_idx; | |
fb3f88cc | 1786 | |
c0cb5055 MJ |
1787 | src->next = sources; |
1788 | sources = src; | |
310bc633 MJ |
1789 | } |
1790 | ||
c0cb5055 MJ |
1791 | /* Allocate a new ipcp_value holding a tree constant, initialize its value to |
1792 | SOURCE and clear all other fields. */ | |
310bc633 | 1793 | |
c0cb5055 MJ |
1794 | static ipcp_value<tree> * |
1795 | allocate_and_init_ipcp_value (tree source) | |
310bc633 | 1796 | { |
c0cb5055 | 1797 | ipcp_value<tree> *val; |
310bc633 | 1798 | |
c3684b7b | 1799 | val = new (ipcp_cst_values_pool.allocate ()) ipcp_value<tree>(); |
44210a96 MJ |
1800 | val->value = source; |
1801 | return val; | |
1802 | } | |
1803 | ||
1804 | /* Allocate a new ipcp_value holding a polymorphic context, initialize its | |
1805 | value to SOURCE and clear all other fields. */ | |
1806 | ||
1807 | static ipcp_value<ipa_polymorphic_call_context> * | |
1808 | allocate_and_init_ipcp_value (ipa_polymorphic_call_context source) | |
1809 | { | |
1810 | ipcp_value<ipa_polymorphic_call_context> *val; | |
1811 | ||
2651e637 | 1812 | // TODO |
c3684b7b MS |
1813 | val = new (ipcp_poly_ctx_values_pool.allocate ()) |
1814 | ipcp_value<ipa_polymorphic_call_context>(); | |
c0cb5055 MJ |
1815 | val->value = source; |
1816 | return val; | |
1817 | } | |
1818 | ||
1819 | /* Try to add NEWVAL to LAT, potentially creating a new ipcp_value for it. CS, | |
1820 | SRC_VAL SRC_INDEX and OFFSET are meant for add_source and have the same | |
1821 | meaning. OFFSET -1 means the source is scalar and not a part of an | |
9b14fc33 FX |
1822 | aggregate. If non-NULL, VAL_P records address of existing or newly added |
1823 | ipcp_value. UNLIMITED means whether value count should not exceed the limit | |
1824 | given by PARAM_IPA_CP_VALUE_LIST_SIZE. */ | |
c0cb5055 MJ |
1825 | |
1826 | template <typename valtype> | |
1827 | bool | |
1828 | ipcp_lattice<valtype>::add_value (valtype newval, cgraph_edge *cs, | |
1829 | ipcp_value<valtype> *src_val, | |
9b14fc33 FX |
1830 | int src_idx, HOST_WIDE_INT offset, |
1831 | ipcp_value<valtype> **val_p, | |
1832 | bool unlimited) | |
c0cb5055 | 1833 | { |
9b14fc33 FX |
1834 | ipcp_value<valtype> *val, *last_val = NULL; |
1835 | ||
1836 | if (val_p) | |
1837 | *val_p = NULL; | |
c0cb5055 MJ |
1838 | |
1839 | if (bottom) | |
310bc633 MJ |
1840 | return false; |
1841 | ||
9b14fc33 | 1842 | for (val = values; val; last_val = val, val = val->next) |
310bc633 MJ |
1843 | if (values_equal_for_ipcp_p (val->value, newval)) |
1844 | { | |
9b14fc33 FX |
1845 | if (val_p) |
1846 | *val_p = val; | |
1847 | ||
4cb13597 | 1848 | if (ipa_edge_within_scc (cs)) |
310bc633 | 1849 | { |
c0cb5055 | 1850 | ipcp_value_source<valtype> *s; |
155c9907 | 1851 | for (s = val->sources; s; s = s->next) |
310bc633 MJ |
1852 | if (s->cs == cs) |
1853 | break; | |
1854 | if (s) | |
1855 | return false; | |
1856 | } | |
1857 | ||
c0cb5055 | 1858 | val->add_source (cs, src_val, src_idx, offset); |
310bc633 MJ |
1859 | return false; |
1860 | } | |
1861 | ||
9b14fc33 | 1862 | if (!unlimited && values_count == param_ipa_cp_value_list_size) |
310bc633 MJ |
1863 | { |
1864 | /* We can only free sources, not the values themselves, because sources | |
026c3cfd | 1865 | of other values in this SCC might point to them. */ |
c0cb5055 | 1866 | for (val = values; val; val = val->next) |
310bc633 MJ |
1867 | { |
1868 | while (val->sources) | |
1869 | { | |
c0cb5055 | 1870 | ipcp_value_source<valtype> *src = val->sources; |
310bc633 | 1871 | val->sources = src->next; |
2651e637 | 1872 | ipcp_sources_pool.remove ((ipcp_value_source<tree>*)src); |
310bc633 MJ |
1873 | } |
1874 | } | |
c0cb5055 MJ |
1875 | values = NULL; |
1876 | return set_to_bottom (); | |
310bc633 MJ |
1877 | } |
1878 | ||
c0cb5055 MJ |
1879 | values_count++; |
1880 | val = allocate_and_init_ipcp_value (newval); | |
1881 | val->add_source (cs, src_val, src_idx, offset); | |
9b14fc33 FX |
1882 | val->next = NULL; |
1883 | ||
1884 | /* Add the new value to end of value list, which can reduce iterations | |
1885 | of propagation stage for recursive function. */ | |
1886 | if (last_val) | |
1887 | last_val->next = val; | |
1888 | else | |
1889 | values = val; | |
1890 | ||
1891 | if (val_p) | |
1892 | *val_p = val; | |
1893 | ||
1894 | return true; | |
1895 | } | |
1896 | ||
1897 | /* Return true, if a ipcp_value VAL is orginated from parameter value of | |
1898 | self-feeding recursive function by applying non-passthrough arithmetic | |
1899 | transformation. */ | |
1900 | ||
1901 | static bool | |
1902 | self_recursively_generated_p (ipcp_value<tree> *val) | |
1903 | { | |
1904 | class ipa_node_params *info = NULL; | |
1905 | ||
1906 | for (ipcp_value_source<tree> *src = val->sources; src; src = src->next) | |
1907 | { | |
1908 | cgraph_edge *cs = src->cs; | |
1909 | ||
1910 | if (!src->val || cs->caller != cs->callee->function_symbol () | |
1911 | || src->val == val) | |
1912 | return false; | |
1913 | ||
1914 | if (!info) | |
1915 | info = IPA_NODE_REF (cs->caller); | |
1916 | ||
1917 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, | |
1918 | src->index); | |
1919 | ipcp_lattice<tree> *src_lat = src->offset == -1 ? &plats->itself | |
1920 | : plats->aggs; | |
1921 | ipcp_value<tree> *src_val; | |
1922 | ||
1923 | for (src_val = src_lat->values; src_val; src_val = src_val->next) | |
1924 | if (src_val == val) | |
1925 | break; | |
1926 | ||
1927 | if (!src_val) | |
1928 | return false; | |
1929 | } | |
1930 | ||
310bc633 MJ |
1931 | return true; |
1932 | } | |
fb3f88cc | 1933 | |
9b14fc33 FX |
1934 | /* A helper function that returns result of operation specified by OPCODE on |
1935 | the value of SRC_VAL. If non-NULL, OPND1_TYPE is expected type for the | |
1936 | value of SRC_VAL. If the operation is binary, OPND2 is a constant value | |
1937 | acting as its second operand. If non-NULL, RES_TYPE is expected type of | |
1938 | the result. */ | |
1939 | ||
1940 | static tree | |
1941 | get_val_across_arith_op (enum tree_code opcode, | |
1942 | tree opnd1_type, | |
1943 | tree opnd2, | |
1944 | ipcp_value<tree> *src_val, | |
1945 | tree res_type) | |
1946 | { | |
1947 | tree opnd1 = src_val->value; | |
1948 | ||
1949 | /* Skip source values that is incompatible with specified type. */ | |
1950 | if (opnd1_type | |
1951 | && !useless_type_conversion_p (opnd1_type, TREE_TYPE (opnd1))) | |
1952 | return NULL_TREE; | |
1953 | ||
1954 | return ipa_get_jf_arith_result (opcode, opnd1, opnd2, res_type); | |
1955 | } | |
1956 | ||
eb270950 FX |
1957 | /* Propagate values through an arithmetic transformation described by a jump |
1958 | function associated with edge CS, taking values from SRC_LAT and putting | |
1959 | them into DEST_LAT. OPND1_TYPE is expected type for the values in SRC_LAT. | |
1960 | OPND2 is a constant value if transformation is a binary operation. | |
1961 | SRC_OFFSET specifies offset in an aggregate if SRC_LAT describes lattice of | |
1962 | a part of the aggregate. SRC_IDX is the index of the source parameter. | |
1963 | RES_TYPE is the value type of result being propagated into. Return true if | |
1964 | DEST_LAT changed. */ | |
310bc633 MJ |
1965 | |
1966 | static bool | |
eb270950 FX |
1967 | propagate_vals_across_arith_jfunc (cgraph_edge *cs, |
1968 | enum tree_code opcode, | |
1969 | tree opnd1_type, | |
1970 | tree opnd2, | |
1971 | ipcp_lattice<tree> *src_lat, | |
1972 | ipcp_lattice<tree> *dest_lat, | |
1973 | HOST_WIDE_INT src_offset, | |
1974 | int src_idx, | |
1975 | tree res_type) | |
310bc633 | 1976 | { |
c0cb5055 | 1977 | ipcp_value<tree> *src_val; |
310bc633 MJ |
1978 | bool ret = false; |
1979 | ||
9b14fc33 FX |
1980 | /* Due to circular dependencies, propagating within an SCC through arithmetic |
1981 | transformation would create infinite number of values. But for | |
1982 | self-feeding recursive function, we could allow propagation in a limited | |
1983 | count, and this can enable a simple kind of recursive function versioning. | |
1984 | For other scenario, we would just make lattices bottom. */ | |
eb270950 | 1985 | if (opcode != NOP_EXPR && ipa_edge_within_scc (cs)) |
9b14fc33 FX |
1986 | { |
1987 | int i; | |
1988 | ||
1989 | if (src_lat != dest_lat || param_ipa_cp_max_recursive_depth < 1) | |
1990 | return dest_lat->set_contains_variable (); | |
1991 | ||
1992 | /* No benefit if recursive execution is in low probability. */ | |
1993 | if (cs->sreal_frequency () * 100 | |
1994 | <= ((sreal) 1) * param_ipa_cp_min_recursive_probability) | |
1995 | return dest_lat->set_contains_variable (); | |
1996 | ||
1997 | auto_vec<ipcp_value<tree> *, 8> val_seeds; | |
1998 | ||
1999 | for (src_val = src_lat->values; src_val; src_val = src_val->next) | |
2000 | { | |
2001 | /* Now we do not use self-recursively generated value as propagation | |
2002 | source, this is absolutely conservative, but could avoid explosion | |
2003 | of lattice's value space, especially when one recursive function | |
2004 | calls another recursive. */ | |
2005 | if (self_recursively_generated_p (src_val)) | |
2006 | { | |
2007 | ipcp_value_source<tree> *s; | |
2008 | ||
2009 | /* If the lattice has already been propagated for the call site, | |
2010 | no need to do that again. */ | |
2011 | for (s = src_val->sources; s; s = s->next) | |
2012 | if (s->cs == cs) | |
2013 | return dest_lat->set_contains_variable (); | |
2014 | } | |
2015 | else | |
2016 | val_seeds.safe_push (src_val); | |
2017 | } | |
2018 | ||
2019 | /* Recursively generate lattice values with a limited count. */ | |
2020 | FOR_EACH_VEC_ELT (val_seeds, i, src_val) | |
2021 | { | |
2022 | for (int j = 1; j < param_ipa_cp_max_recursive_depth; j++) | |
2023 | { | |
2024 | tree cstval = get_val_across_arith_op (opcode, opnd1_type, opnd2, | |
2025 | src_val, res_type); | |
2026 | if (!cstval) | |
2027 | break; | |
2028 | ||
2029 | ret |= dest_lat->add_value (cstval, cs, src_val, src_idx, | |
2030 | src_offset, &src_val, true); | |
2031 | gcc_checking_assert (src_val); | |
2032 | } | |
2033 | } | |
2034 | ret |= dest_lat->set_contains_variable (); | |
2035 | } | |
310bc633 MJ |
2036 | else |
2037 | for (src_val = src_lat->values; src_val; src_val = src_val->next) | |
0818c24c | 2038 | { |
9b14fc33 FX |
2039 | /* Now we do not use self-recursively generated value as propagation |
2040 | source, otherwise it is easy to make value space of normal lattice | |
2041 | overflow. */ | |
2042 | if (self_recursively_generated_p (src_val)) | |
2043 | { | |
2044 | ret |= dest_lat->set_contains_variable (); | |
2045 | continue; | |
2046 | } | |
310bc633 | 2047 | |
9b14fc33 FX |
2048 | tree cstval = get_val_across_arith_op (opcode, opnd1_type, opnd2, |
2049 | src_val, res_type); | |
310bc633 | 2050 | if (cstval) |
eb270950 FX |
2051 | ret |= dest_lat->add_value (cstval, cs, src_val, src_idx, |
2052 | src_offset); | |
310bc633 | 2053 | else |
c0cb5055 | 2054 | ret |= dest_lat->set_contains_variable (); |
0818c24c | 2055 | } |
310bc633 MJ |
2056 | |
2057 | return ret; | |
2058 | } | |
2059 | ||
eb270950 FX |
2060 | /* Propagate values through a pass-through jump function JFUNC associated with |
2061 | edge CS, taking values from SRC_LAT and putting them into DEST_LAT. SRC_IDX | |
2062 | is the index of the source parameter. PARM_TYPE is the type of the | |
2063 | parameter to which the result is passed. */ | |
2064 | ||
2065 | static bool | |
2066 | propagate_vals_across_pass_through (cgraph_edge *cs, ipa_jump_func *jfunc, | |
2067 | ipcp_lattice<tree> *src_lat, | |
2068 | ipcp_lattice<tree> *dest_lat, int src_idx, | |
2069 | tree parm_type) | |
2070 | { | |
2071 | return propagate_vals_across_arith_jfunc (cs, | |
2072 | ipa_get_jf_pass_through_operation (jfunc), | |
2073 | NULL_TREE, | |
2074 | ipa_get_jf_pass_through_operand (jfunc), | |
2075 | src_lat, dest_lat, -1, src_idx, parm_type); | |
2076 | } | |
2077 | ||
310bc633 MJ |
2078 | /* Propagate values through an ancestor jump function JFUNC associated with |
2079 | edge CS, taking values from SRC_LAT and putting them into DEST_LAT. SRC_IDX | |
2080 | is the index of the source parameter. */ | |
2081 | ||
2082 | static bool | |
155c9907 JJ |
2083 | propagate_vals_across_ancestor (struct cgraph_edge *cs, |
2084 | struct ipa_jump_func *jfunc, | |
2085 | ipcp_lattice<tree> *src_lat, | |
2086 | ipcp_lattice<tree> *dest_lat, int src_idx) | |
310bc633 | 2087 | { |
c0cb5055 | 2088 | ipcp_value<tree> *src_val; |
310bc633 MJ |
2089 | bool ret = false; |
2090 | ||
4cb13597 | 2091 | if (ipa_edge_within_scc (cs)) |
c0cb5055 | 2092 | return dest_lat->set_contains_variable (); |
310bc633 MJ |
2093 | |
2094 | for (src_val = src_lat->values; src_val; src_val = src_val->next) | |
2095 | { | |
7b872d9e | 2096 | tree t = ipa_get_jf_ancestor_result (jfunc, src_val->value); |
310bc633 MJ |
2097 | |
2098 | if (t) | |
c0cb5055 | 2099 | ret |= dest_lat->add_value (t, cs, src_val, src_idx); |
310bc633 | 2100 | else |
c0cb5055 | 2101 | ret |= dest_lat->set_contains_variable (); |
310bc633 MJ |
2102 | } |
2103 | ||
2104 | return ret; | |
2105 | } | |
2106 | ||
2c9561b5 | 2107 | /* Propagate scalar values across jump function JFUNC that is associated with |
e5cf5e11 PK |
2108 | edge CS and put the values into DEST_LAT. PARM_TYPE is the type of the |
2109 | parameter to which the result is passed. */ | |
310bc633 MJ |
2110 | |
2111 | static bool | |
155c9907 JJ |
2112 | propagate_scalar_across_jump_function (struct cgraph_edge *cs, |
2113 | struct ipa_jump_func *jfunc, | |
e5cf5e11 PK |
2114 | ipcp_lattice<tree> *dest_lat, |
2115 | tree param_type) | |
310bc633 MJ |
2116 | { |
2117 | if (dest_lat->bottom) | |
2118 | return false; | |
2119 | ||
44210a96 | 2120 | if (jfunc->type == IPA_JF_CONST) |
310bc633 | 2121 | { |
44210a96 | 2122 | tree val = ipa_get_jf_constant (jfunc); |
c0cb5055 | 2123 | return dest_lat->add_value (val, cs, NULL, 0); |
310bc633 MJ |
2124 | } |
2125 | else if (jfunc->type == IPA_JF_PASS_THROUGH | |
2126 | || jfunc->type == IPA_JF_ANCESTOR) | |
2127 | { | |
99b1c316 | 2128 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
c0cb5055 | 2129 | ipcp_lattice<tree> *src_lat; |
310bc633 MJ |
2130 | int src_idx; |
2131 | bool ret; | |
2132 | ||
2133 | if (jfunc->type == IPA_JF_PASS_THROUGH) | |
7b872d9e | 2134 | src_idx = ipa_get_jf_pass_through_formal_id (jfunc); |
310bc633 | 2135 | else |
7b872d9e | 2136 | src_idx = ipa_get_jf_ancestor_formal_id (jfunc); |
310bc633 | 2137 | |
2c9561b5 | 2138 | src_lat = ipa_get_scalar_lat (caller_info, src_idx); |
310bc633 | 2139 | if (src_lat->bottom) |
c0cb5055 | 2140 | return dest_lat->set_contains_variable (); |
310bc633 MJ |
2141 | |
2142 | /* If we would need to clone the caller and cannot, do not propagate. */ | |
2143 | if (!ipcp_versionable_function_p (cs->caller) | |
2144 | && (src_lat->contains_variable | |
2145 | || (src_lat->values_count > 1))) | |
c0cb5055 | 2146 | return dest_lat->set_contains_variable (); |
310bc633 MJ |
2147 | |
2148 | if (jfunc->type == IPA_JF_PASS_THROUGH) | |
155c9907 | 2149 | ret = propagate_vals_across_pass_through (cs, jfunc, src_lat, |
e5cf5e11 | 2150 | dest_lat, src_idx, param_type); |
310bc633 | 2151 | else |
155c9907 JJ |
2152 | ret = propagate_vals_across_ancestor (cs, jfunc, src_lat, dest_lat, |
2153 | src_idx); | |
310bc633 MJ |
2154 | |
2155 | if (src_lat->contains_variable) | |
c0cb5055 | 2156 | ret |= dest_lat->set_contains_variable (); |
310bc633 MJ |
2157 | |
2158 | return ret; | |
2159 | } | |
2160 | ||
2161 | /* TODO: We currently do not handle member method pointers in IPA-CP (we only | |
2162 | use it for indirect inlining), we should propagate them too. */ | |
c0cb5055 | 2163 | return dest_lat->set_contains_variable (); |
310bc633 MJ |
2164 | } |
2165 | ||
44210a96 MJ |
2166 | /* Propagate scalar values across jump function JFUNC that is associated with |
2167 | edge CS and describes argument IDX and put the values into DEST_LAT. */ | |
2168 | ||
2169 | static bool | |
155c9907 | 2170 | propagate_context_across_jump_function (cgraph_edge *cs, |
44210a96 MJ |
2171 | ipa_jump_func *jfunc, int idx, |
2172 | ipcp_lattice<ipa_polymorphic_call_context> *dest_lat) | |
2173 | { | |
2174 | ipa_edge_args *args = IPA_EDGE_REF (cs); | |
2175 | if (dest_lat->bottom) | |
2176 | return false; | |
2177 | bool ret = false; | |
2178 | bool added_sth = false; | |
df0d8136 | 2179 | bool type_preserved = true; |
44210a96 MJ |
2180 | |
2181 | ipa_polymorphic_call_context edge_ctx, *edge_ctx_ptr | |
2182 | = ipa_get_ith_polymorhic_call_context (args, idx); | |
2183 | ||
2184 | if (edge_ctx_ptr) | |
df0d8136 | 2185 | edge_ctx = *edge_ctx_ptr; |
44210a96 MJ |
2186 | |
2187 | if (jfunc->type == IPA_JF_PASS_THROUGH | |
2188 | || jfunc->type == IPA_JF_ANCESTOR) | |
2189 | { | |
99b1c316 | 2190 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
44210a96 MJ |
2191 | int src_idx; |
2192 | ipcp_lattice<ipa_polymorphic_call_context> *src_lat; | |
2193 | ||
2194 | /* TODO: Once we figure out how to propagate speculations, it will | |
2195 | probably be a good idea to switch to speculation if type_preserved is | |
2196 | not set instead of punting. */ | |
2197 | if (jfunc->type == IPA_JF_PASS_THROUGH) | |
2198 | { | |
df0d8136 | 2199 | if (ipa_get_jf_pass_through_operation (jfunc) != NOP_EXPR) |
44210a96 | 2200 | goto prop_fail; |
df0d8136 | 2201 | type_preserved = ipa_get_jf_pass_through_type_preserved (jfunc); |
44210a96 MJ |
2202 | src_idx = ipa_get_jf_pass_through_formal_id (jfunc); |
2203 | } | |
2204 | else | |
2205 | { | |
df0d8136 | 2206 | type_preserved = ipa_get_jf_ancestor_type_preserved (jfunc); |
44210a96 MJ |
2207 | src_idx = ipa_get_jf_ancestor_formal_id (jfunc); |
2208 | } | |
2209 | ||
2210 | src_lat = ipa_get_poly_ctx_lat (caller_info, src_idx); | |
2211 | /* If we would need to clone the caller and cannot, do not propagate. */ | |
2212 | if (!ipcp_versionable_function_p (cs->caller) | |
2213 | && (src_lat->contains_variable | |
2214 | || (src_lat->values_count > 1))) | |
2215 | goto prop_fail; | |
44210a96 MJ |
2216 | |
2217 | ipcp_value<ipa_polymorphic_call_context> *src_val; | |
2218 | for (src_val = src_lat->values; src_val; src_val = src_val->next) | |
2219 | { | |
2220 | ipa_polymorphic_call_context cur = src_val->value; | |
df0d8136 JH |
2221 | |
2222 | if (!type_preserved) | |
2223 | cur.possible_dynamic_type_change (cs->in_polymorphic_cdtor); | |
44210a96 MJ |
2224 | if (jfunc->type == IPA_JF_ANCESTOR) |
2225 | cur.offset_by (ipa_get_jf_ancestor_offset (jfunc)); | |
df0d8136 JH |
2226 | /* TODO: In cases we know how the context is going to be used, |
2227 | we can improve the result by passing proper OTR_TYPE. */ | |
2228 | cur.combine_with (edge_ctx); | |
44210a96 MJ |
2229 | if (!cur.useless_p ()) |
2230 | { | |
df0d8136 JH |
2231 | if (src_lat->contains_variable |
2232 | && !edge_ctx.equal_to (cur)) | |
2233 | ret |= dest_lat->set_contains_variable (); | |
44210a96 MJ |
2234 | ret |= dest_lat->add_value (cur, cs, src_val, src_idx); |
2235 | added_sth = true; | |
2236 | } | |
2237 | } | |
44210a96 MJ |
2238 | } |
2239 | ||
2240 | prop_fail: | |
2241 | if (!added_sth) | |
2242 | { | |
2243 | if (!edge_ctx.useless_p ()) | |
2244 | ret |= dest_lat->add_value (edge_ctx, cs); | |
2245 | else | |
2246 | ret |= dest_lat->set_contains_variable (); | |
2247 | } | |
2248 | ||
2249 | return ret; | |
2250 | } | |
2251 | ||
209ca542 PK |
2252 | /* Propagate bits across jfunc that is associated with |
2253 | edge cs and update dest_lattice accordingly. */ | |
2254 | ||
2255 | bool | |
155c9907 JJ |
2256 | propagate_bits_across_jump_function (cgraph_edge *cs, int idx, |
2257 | ipa_jump_func *jfunc, | |
2258 | ipcp_bits_lattice *dest_lattice) | |
209ca542 PK |
2259 | { |
2260 | if (dest_lattice->bottom_p ()) | |
2261 | return false; | |
2262 | ||
2263 | enum availability availability; | |
2264 | cgraph_node *callee = cs->callee->function_symbol (&availability); | |
99b1c316 | 2265 | class ipa_node_params *callee_info = IPA_NODE_REF (callee); |
209ca542 PK |
2266 | tree parm_type = ipa_get_type (callee_info, idx); |
2267 | ||
b93f25ad ML |
2268 | /* For K&R C programs, ipa_get_type() could return NULL_TREE. Avoid the |
2269 | transform for these cases. Similarly, we can have bad type mismatches | |
2270 | with LTO, avoid doing anything with those too. */ | |
2271 | if (!parm_type | |
2272 | || (!INTEGRAL_TYPE_P (parm_type) && !POINTER_TYPE_P (parm_type))) | |
209ca542 PK |
2273 | { |
2274 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
b93f25ad ML |
2275 | fprintf (dump_file, "Setting dest_lattice to bottom, because type of " |
2276 | "param %i of %s is NULL or unsuitable for bits propagation\n", | |
2277 | idx, cs->callee->name ()); | |
209ca542 PK |
2278 | |
2279 | return dest_lattice->set_to_bottom (); | |
2280 | } | |
2281 | ||
2282 | unsigned precision = TYPE_PRECISION (parm_type); | |
2283 | signop sgn = TYPE_SIGN (parm_type); | |
2284 | ||
67b97478 PK |
2285 | if (jfunc->type == IPA_JF_PASS_THROUGH |
2286 | || jfunc->type == IPA_JF_ANCESTOR) | |
209ca542 | 2287 | { |
99b1c316 | 2288 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
209ca542 | 2289 | tree operand = NULL_TREE; |
67b97478 PK |
2290 | enum tree_code code; |
2291 | unsigned src_idx; | |
209ca542 | 2292 | |
67b97478 PK |
2293 | if (jfunc->type == IPA_JF_PASS_THROUGH) |
2294 | { | |
2295 | code = ipa_get_jf_pass_through_operation (jfunc); | |
2296 | src_idx = ipa_get_jf_pass_through_formal_id (jfunc); | |
2297 | if (code != NOP_EXPR) | |
2298 | operand = ipa_get_jf_pass_through_operand (jfunc); | |
2299 | } | |
2300 | else | |
2301 | { | |
155c9907 | 2302 | code = POINTER_PLUS_EXPR; |
67b97478 PK |
2303 | src_idx = ipa_get_jf_ancestor_formal_id (jfunc); |
2304 | unsigned HOST_WIDE_INT offset = ipa_get_jf_ancestor_offset (jfunc) / BITS_PER_UNIT; | |
2305 | operand = build_int_cstu (size_type_node, offset); | |
2306 | } | |
209ca542 | 2307 | |
99b1c316 | 2308 | class ipcp_param_lattices *src_lats |
209ca542 PK |
2309 | = ipa_get_parm_lattices (caller_info, src_idx); |
2310 | ||
2311 | /* Try to propagate bits if src_lattice is bottom, but jfunc is known. | |
2312 | for eg consider: | |
2313 | int f(int x) | |
2314 | { | |
2315 | g (x & 0xff); | |
2316 | } | |
2317 | Assume lattice for x is bottom, however we can still propagate | |
2318 | result of x & 0xff == 0xff, which gets computed during ccp1 pass | |
2319 | and we store it in jump function during analysis stage. */ | |
2320 | ||
2321 | if (src_lats->bits_lattice.bottom_p () | |
86cd0334 MJ |
2322 | && jfunc->bits) |
2323 | return dest_lattice->meet_with (jfunc->bits->value, jfunc->bits->mask, | |
209ca542 PK |
2324 | precision); |
2325 | else | |
2326 | return dest_lattice->meet_with (src_lats->bits_lattice, precision, sgn, | |
2327 | code, operand); | |
2328 | } | |
2329 | ||
2330 | else if (jfunc->type == IPA_JF_ANCESTOR) | |
2331 | return dest_lattice->set_to_bottom (); | |
86cd0334 MJ |
2332 | else if (jfunc->bits) |
2333 | return dest_lattice->meet_with (jfunc->bits->value, jfunc->bits->mask, | |
2334 | precision); | |
209ca542 PK |
2335 | else |
2336 | return dest_lattice->set_to_bottom (); | |
2337 | } | |
2338 | ||
8bc5448f | 2339 | /* Propagate value range across jump function JFUNC that is associated with |
5d5f1e95 KV |
2340 | edge CS with param of callee of PARAM_TYPE and update DEST_PLATS |
2341 | accordingly. */ | |
8bc5448f KV |
2342 | |
2343 | static bool | |
155c9907 | 2344 | propagate_vr_across_jump_function (cgraph_edge *cs, ipa_jump_func *jfunc, |
99b1c316 | 2345 | class ipcp_param_lattices *dest_plats, |
155c9907 | 2346 | tree param_type) |
8bc5448f | 2347 | { |
8bc5448f KV |
2348 | ipcp_vr_lattice *dest_lat = &dest_plats->m_value_range; |
2349 | ||
2350 | if (dest_lat->bottom_p ()) | |
2351 | return false; | |
2352 | ||
5d5f1e95 KV |
2353 | if (!param_type |
2354 | || (!INTEGRAL_TYPE_P (param_type) | |
2355 | && !POINTER_TYPE_P (param_type))) | |
2356 | return dest_lat->set_to_bottom (); | |
2357 | ||
8bc5448f KV |
2358 | if (jfunc->type == IPA_JF_PASS_THROUGH) |
2359 | { | |
a5e14a42 | 2360 | enum tree_code operation = ipa_get_jf_pass_through_operation (jfunc); |
2b89b748 JH |
2361 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
2362 | int src_idx = ipa_get_jf_pass_through_formal_id (jfunc); | |
2363 | class ipcp_param_lattices *src_lats | |
2364 | = ipa_get_parm_lattices (caller_info, src_idx); | |
2365 | tree operand_type = ipa_get_type (caller_info, src_idx); | |
8bc5448f | 2366 | |
2b89b748 JH |
2367 | if (src_lats->m_value_range.bottom_p ()) |
2368 | return dest_lat->set_to_bottom (); | |
2369 | ||
2370 | value_range vr; | |
a5e14a42 | 2371 | if (TREE_CODE_CLASS (operation) == tcc_unary) |
27f418b8 JJ |
2372 | ipa_vr_operation_and_type_effects (&vr, |
2373 | &src_lats->m_value_range.m_vr, | |
2374 | operation, param_type, | |
2375 | operand_type); | |
2b89b748 JH |
2376 | /* A crude way to prevent unbounded number of value range updates |
2377 | in SCC components. We should allow limited number of updates within | |
2378 | SCC, too. */ | |
2379 | else if (!ipa_edge_within_scc (cs)) | |
2380 | { | |
2381 | tree op = ipa_get_jf_pass_through_operand (jfunc); | |
2382 | value_range op_vr (op, op); | |
2383 | value_range op_res,res; | |
2384 | ||
2385 | range_fold_binary_expr (&op_res, operation, operand_type, | |
2386 | &src_lats->m_value_range.m_vr, &op_vr); | |
2387 | ipa_vr_operation_and_type_effects (&vr, | |
2388 | &op_res, | |
2389 | NOP_EXPR, param_type, | |
2390 | operand_type); | |
2391 | } | |
2392 | if (!vr.undefined_p () && !vr.varying_p ()) | |
2393 | { | |
2394 | if (jfunc->m_vr) | |
2395 | { | |
2396 | value_range jvr; | |
2397 | if (ipa_vr_operation_and_type_effects (&jvr, jfunc->m_vr, | |
2398 | NOP_EXPR, | |
2399 | param_type, | |
2400 | jfunc->m_vr->type ())) | |
27f418b8 | 2401 | vr.intersect (jvr); |
2b89b748 JH |
2402 | } |
2403 | return dest_lat->meet_with (&vr); | |
a2b4c188 | 2404 | } |
8bc5448f KV |
2405 | } |
2406 | else if (jfunc->type == IPA_JF_CONST) | |
2407 | { | |
2408 | tree val = ipa_get_jf_constant (jfunc); | |
2409 | if (TREE_CODE (val) == INTEGER_CST) | |
2410 | { | |
7d22d5a3 | 2411 | val = fold_convert (param_type, val); |
1e401340 KV |
2412 | if (TREE_OVERFLOW_P (val)) |
2413 | val = drop_tree_overflow (val); | |
86cd0334 | 2414 | |
5d462877 | 2415 | value_range tmpvr (val, val); |
86cd0334 | 2416 | return dest_lat->meet_with (&tmpvr); |
8bc5448f KV |
2417 | } |
2418 | } | |
2419 | ||
028d81b1 | 2420 | value_range vr; |
86cd0334 MJ |
2421 | if (jfunc->m_vr |
2422 | && ipa_vr_operation_and_type_effects (&vr, jfunc->m_vr, NOP_EXPR, | |
a5e14a42 | 2423 | param_type, |
54994253 | 2424 | jfunc->m_vr->type ())) |
a5e14a42 | 2425 | return dest_lat->meet_with (&vr); |
8bc5448f KV |
2426 | else |
2427 | return dest_lat->set_to_bottom (); | |
2428 | } | |
2429 | ||
2c9561b5 MJ |
2430 | /* If DEST_PLATS already has aggregate items, check that aggs_by_ref matches |
2431 | NEW_AGGS_BY_REF and if not, mark all aggs as bottoms and return true (in all | |
2432 | other cases, return false). If there are no aggregate items, set | |
2433 | aggs_by_ref to NEW_AGGS_BY_REF. */ | |
2434 | ||
2435 | static bool | |
99b1c316 | 2436 | set_check_aggs_by_ref (class ipcp_param_lattices *dest_plats, |
2c9561b5 MJ |
2437 | bool new_aggs_by_ref) |
2438 | { | |
2439 | if (dest_plats->aggs) | |
2440 | { | |
2441 | if (dest_plats->aggs_by_ref != new_aggs_by_ref) | |
2442 | { | |
2443 | set_agg_lats_to_bottom (dest_plats); | |
2444 | return true; | |
2445 | } | |
2446 | } | |
2447 | else | |
2448 | dest_plats->aggs_by_ref = new_aggs_by_ref; | |
2449 | return false; | |
2450 | } | |
2451 | ||
2452 | /* Walk aggregate lattices in DEST_PLATS from ***AGLAT on, until ***aglat is an | |
2453 | already existing lattice for the given OFFSET and SIZE, marking all skipped | |
2454 | lattices as containing variable and checking for overlaps. If there is no | |
2455 | already existing lattice for the OFFSET and VAL_SIZE, create one, initialize | |
2456 | it with offset, size and contains_variable to PRE_EXISTING, and return true, | |
2457 | unless there are too many already. If there are two many, return false. If | |
2458 | there are overlaps turn whole DEST_PLATS to bottom and return false. If any | |
2459 | skipped lattices were newly marked as containing variable, set *CHANGE to | |
2460 | true. */ | |
2461 | ||
2462 | static bool | |
99b1c316 | 2463 | merge_agg_lats_step (class ipcp_param_lattices *dest_plats, |
2c9561b5 MJ |
2464 | HOST_WIDE_INT offset, HOST_WIDE_INT val_size, |
2465 | struct ipcp_agg_lattice ***aglat, | |
2466 | bool pre_existing, bool *change) | |
2467 | { | |
2468 | gcc_checking_assert (offset >= 0); | |
2469 | ||
2470 | while (**aglat && (**aglat)->offset < offset) | |
2471 | { | |
2472 | if ((**aglat)->offset + (**aglat)->size > offset) | |
2473 | { | |
2474 | set_agg_lats_to_bottom (dest_plats); | |
2475 | return false; | |
2476 | } | |
c0cb5055 | 2477 | *change |= (**aglat)->set_contains_variable (); |
2c9561b5 MJ |
2478 | *aglat = &(**aglat)->next; |
2479 | } | |
2480 | ||
2481 | if (**aglat && (**aglat)->offset == offset) | |
2482 | { | |
b66113e9 | 2483 | if ((**aglat)->size != val_size) |
2c9561b5 MJ |
2484 | { |
2485 | set_agg_lats_to_bottom (dest_plats); | |
2486 | return false; | |
2487 | } | |
b66113e9 MJ |
2488 | gcc_assert (!(**aglat)->next |
2489 | || (**aglat)->next->offset >= offset + val_size); | |
2c9561b5 MJ |
2490 | return true; |
2491 | } | |
2492 | else | |
2493 | { | |
2494 | struct ipcp_agg_lattice *new_al; | |
2495 | ||
2496 | if (**aglat && (**aglat)->offset < offset + val_size) | |
2497 | { | |
2498 | set_agg_lats_to_bottom (dest_plats); | |
2499 | return false; | |
2500 | } | |
028d4092 | 2501 | if (dest_plats->aggs_count == param_ipa_max_agg_items) |
2c9561b5 MJ |
2502 | return false; |
2503 | dest_plats->aggs_count++; | |
2651e637 | 2504 | new_al = ipcp_agg_lattice_pool.allocate (); |
2c9561b5 MJ |
2505 | memset (new_al, 0, sizeof (*new_al)); |
2506 | ||
2507 | new_al->offset = offset; | |
2508 | new_al->size = val_size; | |
2509 | new_al->contains_variable = pre_existing; | |
2510 | ||
2511 | new_al->next = **aglat; | |
2512 | **aglat = new_al; | |
2513 | return true; | |
2514 | } | |
2515 | } | |
2516 | ||
2517 | /* Set all AGLAT and all other aggregate lattices reachable by next pointers as | |
2518 | containing an unknown value. */ | |
2519 | ||
2520 | static bool | |
2521 | set_chain_of_aglats_contains_variable (struct ipcp_agg_lattice *aglat) | |
2522 | { | |
2523 | bool ret = false; | |
2524 | while (aglat) | |
2525 | { | |
c0cb5055 | 2526 | ret |= aglat->set_contains_variable (); |
2c9561b5 MJ |
2527 | aglat = aglat->next; |
2528 | } | |
2529 | return ret; | |
2530 | } | |
2531 | ||
2532 | /* Merge existing aggregate lattices in SRC_PLATS to DEST_PLATS, subtracting | |
2533 | DELTA_OFFSET. CS is the call graph edge and SRC_IDX the index of the source | |
2534 | parameter used for lattice value sources. Return true if DEST_PLATS changed | |
2535 | in any way. */ | |
2536 | ||
2537 | static bool | |
2538 | merge_aggregate_lattices (struct cgraph_edge *cs, | |
99b1c316 MS |
2539 | class ipcp_param_lattices *dest_plats, |
2540 | class ipcp_param_lattices *src_plats, | |
2c9561b5 MJ |
2541 | int src_idx, HOST_WIDE_INT offset_delta) |
2542 | { | |
2543 | bool pre_existing = dest_plats->aggs != NULL; | |
2544 | struct ipcp_agg_lattice **dst_aglat; | |
2545 | bool ret = false; | |
2546 | ||
2547 | if (set_check_aggs_by_ref (dest_plats, src_plats->aggs_by_ref)) | |
2548 | return true; | |
2549 | if (src_plats->aggs_bottom) | |
2550 | return set_agg_lats_contain_variable (dest_plats); | |
3e452a28 MJ |
2551 | if (src_plats->aggs_contain_variable) |
2552 | ret |= set_agg_lats_contain_variable (dest_plats); | |
2c9561b5 MJ |
2553 | dst_aglat = &dest_plats->aggs; |
2554 | ||
2555 | for (struct ipcp_agg_lattice *src_aglat = src_plats->aggs; | |
2556 | src_aglat; | |
2557 | src_aglat = src_aglat->next) | |
2558 | { | |
2559 | HOST_WIDE_INT new_offset = src_aglat->offset - offset_delta; | |
2560 | ||
2561 | if (new_offset < 0) | |
2562 | continue; | |
2563 | if (merge_agg_lats_step (dest_plats, new_offset, src_aglat->size, | |
2564 | &dst_aglat, pre_existing, &ret)) | |
2565 | { | |
2566 | struct ipcp_agg_lattice *new_al = *dst_aglat; | |
2567 | ||
2568 | dst_aglat = &(*dst_aglat)->next; | |
2569 | if (src_aglat->bottom) | |
2570 | { | |
c0cb5055 | 2571 | ret |= new_al->set_contains_variable (); |
2c9561b5 MJ |
2572 | continue; |
2573 | } | |
2574 | if (src_aglat->contains_variable) | |
c0cb5055 MJ |
2575 | ret |= new_al->set_contains_variable (); |
2576 | for (ipcp_value<tree> *val = src_aglat->values; | |
2c9561b5 MJ |
2577 | val; |
2578 | val = val->next) | |
c0cb5055 MJ |
2579 | ret |= new_al->add_value (val->value, cs, val, src_idx, |
2580 | src_aglat->offset); | |
2c9561b5 MJ |
2581 | } |
2582 | else if (dest_plats->aggs_bottom) | |
2583 | return true; | |
2584 | } | |
2585 | ret |= set_chain_of_aglats_contains_variable (*dst_aglat); | |
2586 | return ret; | |
2587 | } | |
2588 | ||
324e93f1 MJ |
2589 | /* Determine whether there is anything to propagate FROM SRC_PLATS through a |
2590 | pass-through JFUNC and if so, whether it has conform and conforms to the | |
2591 | rules about propagating values passed by reference. */ | |
2592 | ||
2593 | static bool | |
99b1c316 | 2594 | agg_pass_through_permissible_p (class ipcp_param_lattices *src_plats, |
324e93f1 MJ |
2595 | struct ipa_jump_func *jfunc) |
2596 | { | |
2597 | return src_plats->aggs | |
2598 | && (!src_plats->aggs_by_ref | |
2599 | || ipa_get_jf_pass_through_agg_preserved (jfunc)); | |
2600 | } | |
2601 | ||
eb270950 FX |
2602 | /* Propagate values through ITEM, jump function for a part of an aggregate, |
2603 | into corresponding aggregate lattice AGLAT. CS is the call graph edge | |
2604 | associated with the jump function. Return true if AGLAT changed in any | |
2605 | way. */ | |
2606 | ||
2607 | static bool | |
2608 | propagate_aggregate_lattice (struct cgraph_edge *cs, | |
2609 | struct ipa_agg_jf_item *item, | |
2610 | struct ipcp_agg_lattice *aglat) | |
2611 | { | |
2612 | class ipa_node_params *caller_info; | |
2613 | class ipcp_param_lattices *src_plats; | |
2614 | struct ipcp_lattice<tree> *src_lat; | |
2615 | HOST_WIDE_INT src_offset; | |
2616 | int src_idx; | |
2617 | tree load_type; | |
2618 | bool ret; | |
2619 | ||
2620 | if (item->jftype == IPA_JF_CONST) | |
2621 | { | |
2622 | tree value = item->value.constant; | |
2623 | ||
2624 | gcc_checking_assert (is_gimple_ip_invariant (value)); | |
2625 | return aglat->add_value (value, cs, NULL, 0); | |
2626 | } | |
2627 | ||
2628 | gcc_checking_assert (item->jftype == IPA_JF_PASS_THROUGH | |
2629 | || item->jftype == IPA_JF_LOAD_AGG); | |
2630 | ||
2631 | caller_info = IPA_NODE_REF (cs->caller); | |
2632 | src_idx = item->value.pass_through.formal_id; | |
2633 | src_plats = ipa_get_parm_lattices (caller_info, src_idx); | |
2634 | ||
2635 | if (item->jftype == IPA_JF_PASS_THROUGH) | |
2636 | { | |
2637 | load_type = NULL_TREE; | |
2638 | src_lat = &src_plats->itself; | |
2639 | src_offset = -1; | |
2640 | } | |
2641 | else | |
2642 | { | |
2643 | HOST_WIDE_INT load_offset = item->value.load_agg.offset; | |
2644 | struct ipcp_agg_lattice *src_aglat; | |
2645 | ||
2646 | for (src_aglat = src_plats->aggs; src_aglat; src_aglat = src_aglat->next) | |
2647 | if (src_aglat->offset >= load_offset) | |
2648 | break; | |
2649 | ||
2650 | load_type = item->value.load_agg.type; | |
2651 | if (!src_aglat | |
2652 | || src_aglat->offset > load_offset | |
2653 | || src_aglat->size != tree_to_shwi (TYPE_SIZE (load_type)) | |
2654 | || src_plats->aggs_by_ref != item->value.load_agg.by_ref) | |
2655 | return aglat->set_contains_variable (); | |
2656 | ||
2657 | src_lat = src_aglat; | |
2658 | src_offset = load_offset; | |
2659 | } | |
2660 | ||
2661 | if (src_lat->bottom | |
2662 | || (!ipcp_versionable_function_p (cs->caller) | |
2663 | && !src_lat->is_single_const ())) | |
2664 | return aglat->set_contains_variable (); | |
2665 | ||
2666 | ret = propagate_vals_across_arith_jfunc (cs, | |
2667 | item->value.pass_through.operation, | |
2668 | load_type, | |
2669 | item->value.pass_through.operand, | |
2670 | src_lat, aglat, | |
2671 | src_offset, | |
2672 | src_idx, | |
2673 | item->type); | |
2674 | ||
2675 | if (src_lat->contains_variable) | |
2676 | ret |= aglat->set_contains_variable (); | |
2677 | ||
2678 | return ret; | |
2679 | } | |
2680 | ||
2c9561b5 MJ |
2681 | /* Propagate scalar values across jump function JFUNC that is associated with |
2682 | edge CS and put the values into DEST_LAT. */ | |
2683 | ||
2684 | static bool | |
155c9907 JJ |
2685 | propagate_aggs_across_jump_function (struct cgraph_edge *cs, |
2686 | struct ipa_jump_func *jfunc, | |
99b1c316 | 2687 | class ipcp_param_lattices *dest_plats) |
2c9561b5 MJ |
2688 | { |
2689 | bool ret = false; | |
2690 | ||
2691 | if (dest_plats->aggs_bottom) | |
2692 | return false; | |
2693 | ||
2694 | if (jfunc->type == IPA_JF_PASS_THROUGH | |
2695 | && ipa_get_jf_pass_through_operation (jfunc) == NOP_EXPR) | |
2696 | { | |
99b1c316 | 2697 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
2c9561b5 | 2698 | int src_idx = ipa_get_jf_pass_through_formal_id (jfunc); |
99b1c316 | 2699 | class ipcp_param_lattices *src_plats; |
2c9561b5 MJ |
2700 | |
2701 | src_plats = ipa_get_parm_lattices (caller_info, src_idx); | |
324e93f1 | 2702 | if (agg_pass_through_permissible_p (src_plats, jfunc)) |
2c9561b5 MJ |
2703 | { |
2704 | /* Currently we do not produce clobber aggregate jump | |
2705 | functions, replace with merging when we do. */ | |
2706 | gcc_assert (!jfunc->agg.items); | |
2707 | ret |= merge_aggregate_lattices (cs, dest_plats, src_plats, | |
2708 | src_idx, 0); | |
2709 | } | |
2710 | else | |
2711 | ret |= set_agg_lats_contain_variable (dest_plats); | |
2712 | } | |
2713 | else if (jfunc->type == IPA_JF_ANCESTOR | |
2714 | && ipa_get_jf_ancestor_agg_preserved (jfunc)) | |
2715 | { | |
99b1c316 | 2716 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
2c9561b5 | 2717 | int src_idx = ipa_get_jf_ancestor_formal_id (jfunc); |
99b1c316 | 2718 | class ipcp_param_lattices *src_plats; |
2c9561b5 MJ |
2719 | |
2720 | src_plats = ipa_get_parm_lattices (caller_info, src_idx); | |
2721 | if (src_plats->aggs && src_plats->aggs_by_ref) | |
2722 | { | |
2723 | /* Currently we do not produce clobber aggregate jump | |
2724 | functions, replace with merging when we do. */ | |
2725 | gcc_assert (!jfunc->agg.items); | |
2726 | ret |= merge_aggregate_lattices (cs, dest_plats, src_plats, src_idx, | |
2727 | ipa_get_jf_ancestor_offset (jfunc)); | |
2728 | } | |
2729 | else if (!src_plats->aggs_by_ref) | |
2730 | ret |= set_agg_lats_to_bottom (dest_plats); | |
2731 | else | |
2732 | ret |= set_agg_lats_contain_variable (dest_plats); | |
2733 | } | |
2734 | else if (jfunc->agg.items) | |
2735 | { | |
2736 | bool pre_existing = dest_plats->aggs != NULL; | |
2737 | struct ipcp_agg_lattice **aglat = &dest_plats->aggs; | |
2738 | struct ipa_agg_jf_item *item; | |
2739 | int i; | |
2740 | ||
2741 | if (set_check_aggs_by_ref (dest_plats, jfunc->agg.by_ref)) | |
2742 | return true; | |
2743 | ||
9771b263 | 2744 | FOR_EACH_VEC_ELT (*jfunc->agg.items, i, item) |
2c9561b5 MJ |
2745 | { |
2746 | HOST_WIDE_INT val_size; | |
2747 | ||
eb270950 | 2748 | if (item->offset < 0 || item->jftype == IPA_JF_UNKNOWN) |
2c9561b5 | 2749 | continue; |
eb270950 | 2750 | val_size = tree_to_shwi (TYPE_SIZE (item->type)); |
2c9561b5 MJ |
2751 | |
2752 | if (merge_agg_lats_step (dest_plats, item->offset, val_size, | |
2753 | &aglat, pre_existing, &ret)) | |
2754 | { | |
eb270950 | 2755 | ret |= propagate_aggregate_lattice (cs, item, *aglat); |
2c9561b5 MJ |
2756 | aglat = &(*aglat)->next; |
2757 | } | |
2758 | else if (dest_plats->aggs_bottom) | |
2759 | return true; | |
2760 | } | |
2761 | ||
2762 | ret |= set_chain_of_aglats_contains_variable (*aglat); | |
2763 | } | |
2764 | else | |
2765 | ret |= set_agg_lats_contain_variable (dest_plats); | |
2766 | ||
2767 | return ret; | |
2768 | } | |
2769 | ||
173b7355 MJ |
2770 | /* Return true if on the way cfrom CS->caller to the final (non-alias and |
2771 | non-thunk) destination, the call passes through a thunk. */ | |
2772 | ||
2773 | static bool | |
2774 | call_passes_through_thunk_p (cgraph_edge *cs) | |
2775 | { | |
2776 | cgraph_node *alias_or_thunk = cs->callee; | |
2777 | while (alias_or_thunk->alias) | |
2778 | alias_or_thunk = alias_or_thunk->get_alias_target (); | |
2779 | return alias_or_thunk->thunk.thunk_p; | |
2780 | } | |
2781 | ||
310bc633 MJ |
2782 | /* Propagate constants from the caller to the callee of CS. INFO describes the |
2783 | caller. */ | |
2784 | ||
2785 | static bool | |
155c9907 | 2786 | propagate_constants_across_call (struct cgraph_edge *cs) |
310bc633 | 2787 | { |
99b1c316 | 2788 | class ipa_node_params *callee_info; |
310bc633 | 2789 | enum availability availability; |
173b7355 | 2790 | cgraph_node *callee; |
99b1c316 | 2791 | class ipa_edge_args *args; |
310bc633 | 2792 | bool ret = false; |
d7da5cc8 | 2793 | int i, args_count, parms_count; |
310bc633 | 2794 | |
d52f5295 | 2795 | callee = cs->callee->function_symbol (&availability); |
67348ccc | 2796 | if (!callee->definition) |
310bc633 | 2797 | return false; |
d52f5295 | 2798 | gcc_checking_assert (callee->has_gimple_body_p ()); |
310bc633 | 2799 | callee_info = IPA_NODE_REF (callee); |
6cf67b62 JH |
2800 | if (!callee_info) |
2801 | return false; | |
310bc633 MJ |
2802 | |
2803 | args = IPA_EDGE_REF (cs); | |
d7da5cc8 | 2804 | parms_count = ipa_get_param_count (callee_info); |
f3fec19f MJ |
2805 | if (parms_count == 0) |
2806 | return false; | |
e72763e2 JH |
2807 | if (!args |
2808 | || !opt_for_fn (cs->caller->decl, flag_ipa_cp) | |
2809 | || !opt_for_fn (cs->caller->decl, optimize)) | |
a33c028e JH |
2810 | { |
2811 | for (i = 0; i < parms_count; i++) | |
2812 | ret |= set_all_contains_variable (ipa_get_parm_lattices (callee_info, | |
2813 | i)); | |
2814 | return ret; | |
2815 | } | |
2816 | args_count = ipa_get_cs_argument_count (args); | |
310bc633 MJ |
2817 | |
2818 | /* If this call goes through a thunk we must not propagate to the first (0th) | |
2819 | parameter. However, we might need to uncover a thunk from below a series | |
2820 | of aliases first. */ | |
173b7355 | 2821 | if (call_passes_through_thunk_p (cs)) |
310bc633 | 2822 | { |
2c9561b5 MJ |
2823 | ret |= set_all_contains_variable (ipa_get_parm_lattices (callee_info, |
2824 | 0)); | |
310bc633 MJ |
2825 | i = 1; |
2826 | } | |
2827 | else | |
2828 | i = 0; | |
2829 | ||
d7da5cc8 | 2830 | for (; (i < args_count) && (i < parms_count); i++) |
310bc633 MJ |
2831 | { |
2832 | struct ipa_jump_func *jump_func = ipa_get_ith_jump_func (args, i); | |
99b1c316 | 2833 | class ipcp_param_lattices *dest_plats; |
a5e14a42 | 2834 | tree param_type = ipa_get_type (callee_info, i); |
310bc633 | 2835 | |
2c9561b5 | 2836 | dest_plats = ipa_get_parm_lattices (callee_info, i); |
d52f5295 | 2837 | if (availability == AVAIL_INTERPOSABLE) |
2c9561b5 | 2838 | ret |= set_all_contains_variable (dest_plats); |
310bc633 | 2839 | else |
2c9561b5 | 2840 | { |
155c9907 | 2841 | ret |= propagate_scalar_across_jump_function (cs, jump_func, |
e5cf5e11 PK |
2842 | &dest_plats->itself, |
2843 | param_type); | |
155c9907 JJ |
2844 | ret |= propagate_context_across_jump_function (cs, jump_func, i, |
2845 | &dest_plats->ctxlat); | |
2846 | ret | |
2847 | |= propagate_bits_across_jump_function (cs, i, jump_func, | |
2848 | &dest_plats->bits_lattice); | |
2849 | ret |= propagate_aggs_across_jump_function (cs, jump_func, | |
2850 | dest_plats); | |
8bc5448f | 2851 | if (opt_for_fn (callee->decl, flag_ipa_vrp)) |
155c9907 JJ |
2852 | ret |= propagate_vr_across_jump_function (cs, jump_func, |
2853 | dest_plats, param_type); | |
8bc5448f KV |
2854 | else |
2855 | ret |= dest_plats->m_value_range.set_to_bottom (); | |
2c9561b5 | 2856 | } |
310bc633 | 2857 | } |
d7da5cc8 | 2858 | for (; i < parms_count; i++) |
2c9561b5 | 2859 | ret |= set_all_contains_variable (ipa_get_parm_lattices (callee_info, i)); |
d7da5cc8 | 2860 | |
310bc633 MJ |
2861 | return ret; |
2862 | } | |
2863 | ||
2864 | /* If an indirect edge IE can be turned into a direct one based on KNOWN_VALS | |
3b97a5c7 MJ |
2865 | KNOWN_CONTEXTS, KNOWN_AGGS or AGG_REPS return the destination. The latter |
2866 | three can be NULL. If AGG_REPS is not NULL, KNOWN_AGGS is ignored. */ | |
310bc633 | 2867 | |
162712de MJ |
2868 | static tree |
2869 | ipa_get_indirect_edge_target_1 (struct cgraph_edge *ie, | |
44210a96 MJ |
2870 | vec<tree> known_csts, |
2871 | vec<ipa_polymorphic_call_context> known_contexts, | |
eb270950 | 2872 | vec<ipa_agg_value_set> known_aggs, |
231b4916 JH |
2873 | struct ipa_agg_replacement_value *agg_reps, |
2874 | bool *speculative) | |
310bc633 MJ |
2875 | { |
2876 | int param_index = ie->indirect_info->param_index; | |
44210a96 | 2877 | HOST_WIDE_INT anc_offset; |
b0d55476 | 2878 | tree t = NULL; |
85942f45 | 2879 | tree target = NULL; |
310bc633 | 2880 | |
231b4916 JH |
2881 | *speculative = false; |
2882 | ||
b0d55476 | 2883 | if (param_index == -1) |
310bc633 MJ |
2884 | return NULL_TREE; |
2885 | ||
2886 | if (!ie->indirect_info->polymorphic) | |
2887 | { | |
b0d55476 | 2888 | tree t = NULL; |
8810cc52 MJ |
2889 | |
2890 | if (ie->indirect_info->agg_contents) | |
2891 | { | |
91bb9f80 MJ |
2892 | t = NULL; |
2893 | if (agg_reps && ie->indirect_info->guaranteed_unmodified) | |
162712de | 2894 | { |
162712de MJ |
2895 | while (agg_reps) |
2896 | { | |
2897 | if (agg_reps->index == param_index | |
7b920a9a MJ |
2898 | && agg_reps->offset == ie->indirect_info->offset |
2899 | && agg_reps->by_ref == ie->indirect_info->by_ref) | |
162712de MJ |
2900 | { |
2901 | t = agg_reps->value; | |
2902 | break; | |
2903 | } | |
2904 | agg_reps = agg_reps->next; | |
2905 | } | |
2906 | } | |
91bb9f80 | 2907 | if (!t) |
8810cc52 | 2908 | { |
eb270950 | 2909 | struct ipa_agg_value_set *agg; |
91bb9f80 | 2910 | if (known_aggs.length () > (unsigned int) param_index) |
eb270950 | 2911 | agg = &known_aggs[param_index]; |
91bb9f80 MJ |
2912 | else |
2913 | agg = NULL; | |
2914 | bool from_global_constant; | |
b0d55476 JH |
2915 | t = ipa_find_agg_cst_for_param (agg, |
2916 | (unsigned) param_index | |
2917 | < known_csts.length () | |
2918 | ? known_csts[param_index] | |
2919 | : NULL, | |
91bb9f80 MJ |
2920 | ie->indirect_info->offset, |
2921 | ie->indirect_info->by_ref, | |
2922 | &from_global_constant); | |
44a71f36 MJ |
2923 | if (t |
2924 | && !from_global_constant | |
91bb9f80 MJ |
2925 | && !ie->indirect_info->guaranteed_unmodified) |
2926 | t = NULL_TREE; | |
8810cc52 | 2927 | } |
8810cc52 | 2928 | } |
b0d55476 | 2929 | else if ((unsigned) param_index < known_csts.length ()) |
44210a96 | 2930 | t = known_csts[param_index]; |
8810cc52 | 2931 | |
155c9907 JJ |
2932 | if (t |
2933 | && TREE_CODE (t) == ADDR_EXPR | |
310bc633 | 2934 | && TREE_CODE (TREE_OPERAND (t, 0)) == FUNCTION_DECL) |
81fa35bd | 2935 | return TREE_OPERAND (t, 0); |
310bc633 MJ |
2936 | else |
2937 | return NULL_TREE; | |
2938 | } | |
2939 | ||
2bf86c84 | 2940 | if (!opt_for_fn (ie->caller->decl, flag_devirtualize)) |
85942f45 JH |
2941 | return NULL_TREE; |
2942 | ||
8810cc52 | 2943 | gcc_assert (!ie->indirect_info->agg_contents); |
8b7773a4 | 2944 | anc_offset = ie->indirect_info->offset; |
310bc633 | 2945 | |
85942f45 JH |
2946 | t = NULL; |
2947 | ||
f25ae20e | 2948 | /* Try to work out value of virtual table pointer value in replacements. */ |
231b4916 | 2949 | if (!t && agg_reps && !ie->indirect_info->by_ref) |
85942f45 JH |
2950 | { |
2951 | while (agg_reps) | |
2952 | { | |
2953 | if (agg_reps->index == param_index | |
2954 | && agg_reps->offset == ie->indirect_info->offset | |
2955 | && agg_reps->by_ref) | |
2956 | { | |
2957 | t = agg_reps->value; | |
2958 | break; | |
2959 | } | |
2960 | agg_reps = agg_reps->next; | |
2961 | } | |
2962 | } | |
2963 | ||
2964 | /* Try to work out value of virtual table pointer value in known | |
2965 | aggregate values. */ | |
2966 | if (!t && known_aggs.length () > (unsigned int) param_index | |
231b4916 | 2967 | && !ie->indirect_info->by_ref) |
85942f45 | 2968 | { |
eb270950 | 2969 | struct ipa_agg_value_set *agg = &known_aggs[param_index]; |
b0d55476 JH |
2970 | t = ipa_find_agg_cst_for_param (agg, |
2971 | (unsigned) param_index | |
2972 | < known_csts.length () | |
2973 | ? known_csts[param_index] : NULL, | |
155c9907 | 2974 | ie->indirect_info->offset, true); |
85942f45 JH |
2975 | } |
2976 | ||
9de2f554 | 2977 | /* If we found the virtual table pointer, lookup the target. */ |
85942f45 | 2978 | if (t) |
9de2f554 JH |
2979 | { |
2980 | tree vtable; | |
2981 | unsigned HOST_WIDE_INT offset; | |
2982 | if (vtable_pointer_value_to_vtable (t, &vtable, &offset)) | |
2983 | { | |
2994ab20 | 2984 | bool can_refer; |
9de2f554 | 2985 | target = gimple_get_virt_method_for_vtable (ie->indirect_info->otr_token, |
2994ab20 JH |
2986 | vtable, offset, &can_refer); |
2987 | if (can_refer) | |
9de2f554 | 2988 | { |
2994ab20 | 2989 | if (!target |
cb1180d5 | 2990 | || fndecl_built_in_p (target, BUILT_IN_UNREACHABLE) |
8472fa80 | 2991 | || !possible_polymorphic_call_target_p |
d52f5295 | 2992 | (ie, cgraph_node::get (target))) |
2994ab20 JH |
2993 | { |
2994 | /* Do not speculate builtin_unreachable, it is stupid! */ | |
2995 | if (ie->indirect_info->vptr_changed) | |
2996 | return NULL; | |
2997 | target = ipa_impossible_devirt_target (ie, target); | |
2998 | } | |
155c9907 | 2999 | *speculative = ie->indirect_info->vptr_changed; |
231b4916 | 3000 | if (!*speculative) |
155c9907 | 3001 | return target; |
9de2f554 | 3002 | } |
9de2f554 JH |
3003 | } |
3004 | } | |
85942f45 | 3005 | |
44210a96 | 3006 | /* Do we know the constant value of pointer? */ |
b0d55476 | 3007 | if (!t && (unsigned) param_index < known_csts.length ()) |
44210a96 | 3008 | t = known_csts[param_index]; |
310bc633 | 3009 | |
44210a96 MJ |
3010 | gcc_checking_assert (!t || TREE_CODE (t) != TREE_BINFO); |
3011 | ||
3012 | ipa_polymorphic_call_context context; | |
3013 | if (known_contexts.length () > (unsigned int) param_index) | |
310bc633 | 3014 | { |
44210a96 | 3015 | context = known_contexts[param_index]; |
df0d8136 JH |
3016 | context.offset_by (anc_offset); |
3017 | if (ie->indirect_info->vptr_changed) | |
3018 | context.possible_dynamic_type_change (ie->in_polymorphic_cdtor, | |
3019 | ie->indirect_info->otr_type); | |
44210a96 MJ |
3020 | if (t) |
3021 | { | |
3022 | ipa_polymorphic_call_context ctx2 = ipa_polymorphic_call_context | |
3023 | (t, ie->indirect_info->otr_type, anc_offset); | |
3024 | if (!ctx2.useless_p ()) | |
3025 | context.combine_with (ctx2, ie->indirect_info->otr_type); | |
3026 | } | |
310bc633 | 3027 | } |
44210a96 | 3028 | else if (t) |
33c3b6be JH |
3029 | { |
3030 | context = ipa_polymorphic_call_context (t, ie->indirect_info->otr_type, | |
3031 | anc_offset); | |
3032 | if (ie->indirect_info->vptr_changed) | |
3033 | context.possible_dynamic_type_change (ie->in_polymorphic_cdtor, | |
3034 | ie->indirect_info->otr_type); | |
3035 | } | |
310bc633 | 3036 | else |
44210a96 | 3037 | return NULL_TREE; |
310bc633 | 3038 | |
44210a96 MJ |
3039 | vec <cgraph_node *>targets; |
3040 | bool final; | |
3041 | ||
3042 | targets = possible_polymorphic_call_targets | |
3043 | (ie->indirect_info->otr_type, | |
3044 | ie->indirect_info->otr_token, | |
3045 | context, &final); | |
3046 | if (!final || targets.length () > 1) | |
231b4916 JH |
3047 | { |
3048 | struct cgraph_node *node; | |
3049 | if (*speculative) | |
3050 | return target; | |
2bf86c84 JH |
3051 | if (!opt_for_fn (ie->caller->decl, flag_devirtualize_speculatively) |
3052 | || ie->speculative || !ie->maybe_hot_p ()) | |
231b4916 JH |
3053 | return NULL; |
3054 | node = try_speculative_devirtualization (ie->indirect_info->otr_type, | |
3055 | ie->indirect_info->otr_token, | |
3056 | context); | |
3057 | if (node) | |
3058 | { | |
3059 | *speculative = true; | |
3060 | target = node->decl; | |
3061 | } | |
3062 | else | |
3063 | return NULL; | |
3064 | } | |
44210a96 | 3065 | else |
231b4916 JH |
3066 | { |
3067 | *speculative = false; | |
3068 | if (targets.length () == 1) | |
3069 | target = targets[0]->decl; | |
3070 | else | |
3071 | target = ipa_impossible_devirt_target (ie, NULL_TREE); | |
3072 | } | |
b5165eb0 MJ |
3073 | |
3074 | if (target && !possible_polymorphic_call_target_p (ie, | |
d52f5295 | 3075 | cgraph_node::get (target))) |
2994ab20 JH |
3076 | { |
3077 | if (*speculative) | |
3078 | return NULL; | |
3079 | target = ipa_impossible_devirt_target (ie, target); | |
3080 | } | |
450ad0cd JH |
3081 | |
3082 | return target; | |
310bc633 MJ |
3083 | } |
3084 | ||
162712de | 3085 | |
44210a96 MJ |
3086 | /* If an indirect edge IE can be turned into a direct one based on KNOWN_CSTS, |
3087 | KNOWN_CONTEXTS (which can be vNULL) or KNOWN_AGGS (which also can be vNULL) | |
3088 | return the destination. */ | |
162712de MJ |
3089 | |
3090 | tree | |
3091 | ipa_get_indirect_edge_target (struct cgraph_edge *ie, | |
44210a96 MJ |
3092 | vec<tree> known_csts, |
3093 | vec<ipa_polymorphic_call_context> known_contexts, | |
eb270950 | 3094 | vec<ipa_agg_value_set> known_aggs, |
231b4916 | 3095 | bool *speculative) |
162712de | 3096 | { |
44210a96 | 3097 | return ipa_get_indirect_edge_target_1 (ie, known_csts, known_contexts, |
231b4916 | 3098 | known_aggs, NULL, speculative); |
162712de MJ |
3099 | } |
3100 | ||
310bc633 | 3101 | /* Calculate devirtualization time bonus for NODE, assuming we know KNOWN_CSTS |
44210a96 | 3102 | and KNOWN_CONTEXTS. */ |
310bc633 MJ |
3103 | |
3104 | static int | |
3105 | devirtualization_time_bonus (struct cgraph_node *node, | |
9771b263 | 3106 | vec<tree> known_csts, |
44210a96 | 3107 | vec<ipa_polymorphic_call_context> known_contexts, |
eb270950 | 3108 | vec<ipa_agg_value_set> known_aggs) |
310bc633 MJ |
3109 | { |
3110 | struct cgraph_edge *ie; | |
3111 | int res = 0; | |
3112 | ||
3113 | for (ie = node->indirect_calls; ie; ie = ie->next_callee) | |
3114 | { | |
3115 | struct cgraph_node *callee; | |
99b1c316 | 3116 | class ipa_fn_summary *isummary; |
8ad274d2 | 3117 | enum availability avail; |
81fa35bd | 3118 | tree target; |
231b4916 | 3119 | bool speculative; |
310bc633 | 3120 | |
44210a96 | 3121 | target = ipa_get_indirect_edge_target (ie, known_csts, known_contexts, |
231b4916 | 3122 | known_aggs, &speculative); |
310bc633 MJ |
3123 | if (!target) |
3124 | continue; | |
3125 | ||
3126 | /* Only bare minimum benefit for clearly un-inlineable targets. */ | |
3127 | res += 1; | |
d52f5295 | 3128 | callee = cgraph_node::get (target); |
67348ccc | 3129 | if (!callee || !callee->definition) |
310bc633 | 3130 | continue; |
d52f5295 | 3131 | callee = callee->function_symbol (&avail); |
8ad274d2 JH |
3132 | if (avail < AVAIL_AVAILABLE) |
3133 | continue; | |
56f62793 | 3134 | isummary = ipa_fn_summaries->get (callee); |
310bc633 MJ |
3135 | if (!isummary->inlinable) |
3136 | continue; | |
3137 | ||
f658ad30 | 3138 | int size = ipa_size_summaries->get (callee)->size; |
310bc633 MJ |
3139 | /* FIXME: The values below need re-considering and perhaps also |
3140 | integrating into the cost metrics, at lest in some very basic way. */ | |
78a502ca ML |
3141 | int max_inline_insns_auto |
3142 | = opt_for_fn (callee->decl, param_max_inline_insns_auto); | |
3143 | if (size <= max_inline_insns_auto / 4) | |
231b4916 | 3144 | res += 31 / ((int)speculative + 1); |
78a502ca | 3145 | else if (size <= max_inline_insns_auto / 2) |
231b4916 | 3146 | res += 15 / ((int)speculative + 1); |
78a502ca | 3147 | else if (size <= max_inline_insns_auto |
67348ccc | 3148 | || DECL_DECLARED_INLINE_P (callee->decl)) |
231b4916 | 3149 | res += 7 / ((int)speculative + 1); |
310bc633 MJ |
3150 | } |
3151 | ||
3152 | return res; | |
3153 | } | |
3154 | ||
2c9561b5 MJ |
3155 | /* Return time bonus incurred because of HINTS. */ |
3156 | ||
3157 | static int | |
0bceb671 | 3158 | hint_time_bonus (ipa_hints hints) |
2c9561b5 | 3159 | { |
19321415 | 3160 | int result = 0; |
2c9561b5 | 3161 | if (hints & (INLINE_HINT_loop_iterations | INLINE_HINT_loop_stride)) |
028d4092 | 3162 | result += param_ipa_cp_loop_hint_bonus; |
19321415 | 3163 | return result; |
2c9561b5 MJ |
3164 | } |
3165 | ||
af21714c MJ |
3166 | /* If there is a reason to penalize the function described by INFO in the |
3167 | cloning goodness evaluation, do so. */ | |
3168 | ||
3169 | static inline int64_t | |
3170 | incorporate_penalties (ipa_node_params *info, int64_t evaluation) | |
3171 | { | |
9b14fc33 | 3172 | if (info->node_within_scc && !info->node_is_self_scc) |
af21714c | 3173 | evaluation = (evaluation |
028d4092 | 3174 | * (100 - param_ipa_cp_recursion_penalty)) / 100; |
af21714c MJ |
3175 | |
3176 | if (info->node_calling_single_call) | |
3177 | evaluation = (evaluation | |
028d4092 | 3178 | * (100 - param_ipa_cp_single_call_penalty)) |
af21714c MJ |
3179 | / 100; |
3180 | ||
3181 | return evaluation; | |
3182 | } | |
3183 | ||
310bc633 MJ |
3184 | /* Return true if cloning NODE is a good idea, given the estimated TIME_BENEFIT |
3185 | and SIZE_COST and with the sum of frequencies of incoming edges to the | |
3186 | potential new clone in FREQUENCIES. */ | |
3187 | ||
3188 | static bool | |
3189 | good_cloning_opportunity_p (struct cgraph_node *node, int time_benefit, | |
3995f3a2 | 3190 | int freq_sum, profile_count count_sum, int size_cost) |
310bc633 MJ |
3191 | { |
3192 | if (time_benefit == 0 | |
2bf86c84 | 3193 | || !opt_for_fn (node->decl, flag_ipa_cp_clone) |
5af56ae8 | 3194 | || node->optimize_for_size_p ()) |
310bc633 MJ |
3195 | return false; |
3196 | ||
df0227c4 | 3197 | gcc_assert (size_cost > 0); |
310bc633 | 3198 | |
99b1c316 | 3199 | class ipa_node_params *info = IPA_NODE_REF (node); |
3995f3a2 | 3200 | if (max_count > profile_count::zero ()) |
310bc633 | 3201 | { |
357067f2 JH |
3202 | int factor = RDIV (count_sum.probability_in |
3203 | (max_count).to_reg_br_prob_base () | |
3995f3a2 | 3204 | * 1000, REG_BR_PROB_BASE); |
a9243bfc | 3205 | int64_t evaluation = (((int64_t) time_benefit * factor) |
df0227c4 | 3206 | / size_cost); |
af21714c | 3207 | evaluation = incorporate_penalties (info, evaluation); |
310bc633 MJ |
3208 | |
3209 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3995f3a2 JH |
3210 | { |
3211 | fprintf (dump_file, " good_cloning_opportunity_p (time: %i, " | |
3212 | "size: %i, count_sum: ", time_benefit, size_cost); | |
3213 | count_sum.dump (dump_file); | |
3214 | fprintf (dump_file, "%s%s) -> evaluation: " "%" PRId64 | |
df0227c4 | 3215 | ", threshold: %i\n", |
9b14fc33 FX |
3216 | info->node_within_scc |
3217 | ? (info->node_is_self_scc ? ", self_scc" : ", scc") : "", | |
af21714c | 3218 | info->node_calling_single_call ? ", single_call" : "", |
028d4092 | 3219 | evaluation, param_ipa_cp_eval_threshold); |
3995f3a2 | 3220 | } |
310bc633 | 3221 | |
028d4092 | 3222 | return evaluation >= param_ipa_cp_eval_threshold; |
310bc633 MJ |
3223 | } |
3224 | else | |
3225 | { | |
a9243bfc | 3226 | int64_t evaluation = (((int64_t) time_benefit * freq_sum) |
df0227c4 | 3227 | / size_cost); |
af21714c | 3228 | evaluation = incorporate_penalties (info, evaluation); |
310bc633 MJ |
3229 | |
3230 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3231 | fprintf (dump_file, " good_cloning_opportunity_p (time: %i, " | |
af21714c | 3232 | "size: %i, freq_sum: %i%s%s) -> evaluation: " |
16998094 | 3233 | "%" PRId64 ", threshold: %i\n", |
af21714c | 3234 | time_benefit, size_cost, freq_sum, |
9b14fc33 FX |
3235 | info->node_within_scc |
3236 | ? (info->node_is_self_scc ? ", self_scc" : ", scc") : "", | |
af21714c | 3237 | info->node_calling_single_call ? ", single_call" : "", |
028d4092 | 3238 | evaluation, param_ipa_cp_eval_threshold); |
310bc633 | 3239 | |
028d4092 | 3240 | return evaluation >= param_ipa_cp_eval_threshold; |
310bc633 MJ |
3241 | } |
3242 | } | |
3243 | ||
2c9561b5 MJ |
3244 | /* Return all context independent values from aggregate lattices in PLATS in a |
3245 | vector. Return NULL if there are none. */ | |
3246 | ||
eb270950 | 3247 | static vec<ipa_agg_value> |
99b1c316 | 3248 | context_independent_aggregate_values (class ipcp_param_lattices *plats) |
2c9561b5 | 3249 | { |
eb270950 | 3250 | vec<ipa_agg_value> res = vNULL; |
2c9561b5 MJ |
3251 | |
3252 | if (plats->aggs_bottom | |
3253 | || plats->aggs_contain_variable | |
3254 | || plats->aggs_count == 0) | |
eb270950 | 3255 | return vNULL; |
2c9561b5 MJ |
3256 | |
3257 | for (struct ipcp_agg_lattice *aglat = plats->aggs; | |
3258 | aglat; | |
3259 | aglat = aglat->next) | |
c0cb5055 | 3260 | if (aglat->is_single_const ()) |
2c9561b5 | 3261 | { |
eb270950 | 3262 | struct ipa_agg_value item; |
2c9561b5 MJ |
3263 | item.offset = aglat->offset; |
3264 | item.value = aglat->values->value; | |
eb270950 | 3265 | res.safe_push (item); |
2c9561b5 MJ |
3266 | } |
3267 | return res; | |
3268 | } | |
310bc633 | 3269 | |
44210a96 MJ |
3270 | /* Allocate KNOWN_CSTS, KNOWN_CONTEXTS and, if non-NULL, KNOWN_AGGS and |
3271 | populate them with values of parameters that are known independent of the | |
3272 | context. INFO describes the function. If REMOVABLE_PARAMS_COST is | |
3273 | non-NULL, the movement cost of all removable parameters will be stored in | |
3274 | it. */ | |
310bc633 MJ |
3275 | |
3276 | static bool | |
99b1c316 | 3277 | gather_context_independent_values (class ipa_node_params *info, |
44210a96 MJ |
3278 | vec<tree> *known_csts, |
3279 | vec<ipa_polymorphic_call_context> | |
3280 | *known_contexts, | |
eb270950 | 3281 | vec<ipa_agg_value_set> *known_aggs, |
44210a96 | 3282 | int *removable_params_cost) |
310bc633 MJ |
3283 | { |
3284 | int i, count = ipa_get_param_count (info); | |
3285 | bool ret = false; | |
3286 | ||
9771b263 | 3287 | known_csts->create (0); |
44210a96 | 3288 | known_contexts->create (0); |
9771b263 | 3289 | known_csts->safe_grow_cleared (count); |
44210a96 | 3290 | known_contexts->safe_grow_cleared (count); |
2c9561b5 MJ |
3291 | if (known_aggs) |
3292 | { | |
9771b263 DN |
3293 | known_aggs->create (0); |
3294 | known_aggs->safe_grow_cleared (count); | |
2c9561b5 | 3295 | } |
310bc633 MJ |
3296 | |
3297 | if (removable_params_cost) | |
3298 | *removable_params_cost = 0; | |
3299 | ||
155c9907 | 3300 | for (i = 0; i < count; i++) |
310bc633 | 3301 | { |
99b1c316 | 3302 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
c0cb5055 | 3303 | ipcp_lattice<tree> *lat = &plats->itself; |
310bc633 | 3304 | |
c0cb5055 | 3305 | if (lat->is_single_const ()) |
310bc633 | 3306 | { |
c0cb5055 | 3307 | ipcp_value<tree> *val = lat->values; |
44210a96 MJ |
3308 | gcc_checking_assert (TREE_CODE (val->value) != TREE_BINFO); |
3309 | (*known_csts)[i] = val->value; | |
3310 | if (removable_params_cost) | |
3311 | *removable_params_cost | |
3312 | += estimate_move_cost (TREE_TYPE (val->value), false); | |
3313 | ret = true; | |
310bc633 MJ |
3314 | } |
3315 | else if (removable_params_cost | |
3316 | && !ipa_is_param_used (info, i)) | |
3317 | *removable_params_cost | |
0e8853ee | 3318 | += ipa_get_param_move_cost (info, i); |
2c9561b5 | 3319 | |
5af56ae8 JH |
3320 | if (!ipa_is_param_used (info, i)) |
3321 | continue; | |
3322 | ||
44210a96 | 3323 | ipcp_lattice<ipa_polymorphic_call_context> *ctxlat = &plats->ctxlat; |
5af56ae8 JH |
3324 | /* Do not account known context as reason for cloning. We can see |
3325 | if it permits devirtualization. */ | |
44210a96 | 3326 | if (ctxlat->is_single_const ()) |
5af56ae8 | 3327 | (*known_contexts)[i] = ctxlat->values->value; |
44210a96 | 3328 | |
2c9561b5 MJ |
3329 | if (known_aggs) |
3330 | { | |
eb270950 FX |
3331 | vec<ipa_agg_value> agg_items; |
3332 | struct ipa_agg_value_set *agg; | |
2c9561b5 MJ |
3333 | |
3334 | agg_items = context_independent_aggregate_values (plats); | |
eb270950 FX |
3335 | agg = &(*known_aggs)[i]; |
3336 | agg->items = agg_items; | |
3337 | agg->by_ref = plats->aggs_by_ref; | |
3338 | ret |= !agg_items.is_empty (); | |
2c9561b5 | 3339 | } |
310bc633 MJ |
3340 | } |
3341 | ||
3342 | return ret; | |
3343 | } | |
3344 | ||
c0cb5055 | 3345 | /* Perform time and size measurement of NODE with the context given in |
44210a96 | 3346 | KNOWN_CSTS, KNOWN_CONTEXTS and KNOWN_AGGS, calculate the benefit and cost |
c0cb5055 MJ |
3347 | given BASE_TIME of the node without specialization, REMOVABLE_PARAMS_COST of |
3348 | all context-independent removable parameters and EST_MOVE_COST of estimated | |
3349 | movement of the considered parameter and store it into VAL. */ | |
3350 | ||
3351 | static void | |
3352 | perform_estimation_of_a_value (cgraph_node *node, vec<tree> known_csts, | |
44210a96 | 3353 | vec<ipa_polymorphic_call_context> known_contexts, |
eb270950 | 3354 | vec<ipa_agg_value_set> known_aggs, |
26f1a658 | 3355 | int removable_params_cost, |
c0cb5055 MJ |
3356 | int est_move_cost, ipcp_value_base *val) |
3357 | { | |
ab38481c | 3358 | int size, time_benefit; |
26f1a658 | 3359 | sreal time, base_time; |
0bceb671 | 3360 | ipa_hints hints; |
c0cb5055 | 3361 | |
44210a96 | 3362 | estimate_ipcp_clone_size_and_time (node, known_csts, known_contexts, |
eb270950 | 3363 | known_aggs, &size, &time, |
26f1a658 | 3364 | &base_time, &hints); |
ab38481c JH |
3365 | base_time -= time; |
3366 | if (base_time > 65535) | |
3367 | base_time = 65535; | |
59d9a0aa MJ |
3368 | |
3369 | /* Extern inline functions have no cloning local time benefits because they | |
3370 | will be inlined anyway. The only reason to clone them is if it enables | |
3371 | optimization in any of the functions they call. */ | |
3372 | if (DECL_EXTERNAL (node->decl) && DECL_DECLARED_INLINE_P (node->decl)) | |
3373 | time_benefit = 0; | |
3374 | else | |
3375 | time_benefit = base_time.to_int () | |
3376 | + devirtualization_time_bonus (node, known_csts, known_contexts, | |
eb270950 | 3377 | known_aggs) |
59d9a0aa MJ |
3378 | + hint_time_bonus (hints) |
3379 | + removable_params_cost + est_move_cost; | |
c0cb5055 MJ |
3380 | |
3381 | gcc_checking_assert (size >=0); | |
3382 | /* The inliner-heuristics based estimates may think that in certain | |
3383 | contexts some functions do not have any size at all but we want | |
3384 | all specializations to have at least a tiny cost, not least not to | |
3385 | divide by zero. */ | |
3386 | if (size == 0) | |
3387 | size = 1; | |
3388 | ||
3389 | val->local_time_benefit = time_benefit; | |
3390 | val->local_size_cost = size; | |
3391 | } | |
3392 | ||
310bc633 MJ |
3393 | /* Iterate over known values of parameters of NODE and estimate the local |
3394 | effects in terms of time and size they have. */ | |
3395 | ||
3396 | static void | |
3397 | estimate_local_effects (struct cgraph_node *node) | |
3398 | { | |
99b1c316 | 3399 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 | 3400 | int i, count = ipa_get_param_count (info); |
44210a96 MJ |
3401 | vec<tree> known_csts; |
3402 | vec<ipa_polymorphic_call_context> known_contexts; | |
eb270950 | 3403 | vec<ipa_agg_value_set> known_aggs; |
310bc633 | 3404 | bool always_const; |
310bc633 MJ |
3405 | int removable_params_cost; |
3406 | ||
3407 | if (!count || !ipcp_versionable_function_p (node)) | |
3408 | return; | |
3409 | ||
ca30a539 | 3410 | if (dump_file && (dump_flags & TDF_DETAILS)) |
464d0118 | 3411 | fprintf (dump_file, "\nEstimating effects for %s.\n", node->dump_name ()); |
310bc633 MJ |
3412 | |
3413 | always_const = gather_context_independent_values (info, &known_csts, | |
44210a96 | 3414 | &known_contexts, &known_aggs, |
310bc633 | 3415 | &removable_params_cost); |
5af56ae8 | 3416 | int devirt_bonus = devirtualization_time_bonus (node, known_csts, |
eb270950 | 3417 | known_contexts, known_aggs); |
dcf89d57 | 3418 | if (always_const || devirt_bonus |
87f94429 | 3419 | || (removable_params_cost && node->can_change_signature)) |
ca30a539 | 3420 | { |
310bc633 | 3421 | struct caller_statistics stats; |
0bceb671 | 3422 | ipa_hints hints; |
26f1a658 | 3423 | sreal time, base_time; |
ab38481c | 3424 | int size; |
310bc633 MJ |
3425 | |
3426 | init_caller_stats (&stats); | |
d52f5295 ML |
3427 | node->call_for_symbol_thunks_and_aliases (gather_caller_stats, &stats, |
3428 | false); | |
44210a96 | 3429 | estimate_ipcp_clone_size_and_time (node, known_csts, known_contexts, |
eb270950 | 3430 | known_aggs, &size, &time, |
26f1a658 | 3431 | &base_time, &hints); |
5af56ae8 | 3432 | time -= devirt_bonus; |
2c9561b5 | 3433 | time -= hint_time_bonus (hints); |
310bc633 MJ |
3434 | time -= removable_params_cost; |
3435 | size -= stats.n_calls * removable_params_cost; | |
3436 | ||
3437 | if (dump_file) | |
3438 | fprintf (dump_file, " - context independent values, size: %i, " | |
ab38481c | 3439 | "time_benefit: %f\n", size, (base_time - time).to_double ()); |
310bc633 | 3440 | |
87f94429 | 3441 | if (size <= 0 || node->local) |
310bc633 | 3442 | { |
eb20b778 | 3443 | info->do_clone_for_all_contexts = true; |
310bc633 MJ |
3444 | |
3445 | if (dump_file) | |
3446 | fprintf (dump_file, " Decided to specialize for all " | |
3447 | "known contexts, code not going to grow.\n"); | |
3448 | } | |
26f1a658 | 3449 | else if (good_cloning_opportunity_p (node, |
5036f628 | 3450 | MIN ((base_time - time).to_int (), |
26f1a658 | 3451 | 65536), |
310bc633 MJ |
3452 | stats.freq_sum, stats.count_sum, |
3453 | size)) | |
3454 | { | |
3455 | if (size + overall_size <= max_new_size) | |
3456 | { | |
eb20b778 | 3457 | info->do_clone_for_all_contexts = true; |
310bc633 MJ |
3458 | overall_size += size; |
3459 | ||
3460 | if (dump_file) | |
3461 | fprintf (dump_file, " Decided to specialize for all " | |
3462 | "known contexts, growth deemed beneficial.\n"); | |
3463 | } | |
3464 | else if (dump_file && (dump_flags & TDF_DETAILS)) | |
3465 | fprintf (dump_file, " Not cloning for all contexts because " | |
3466 | "max_new_size would be reached with %li.\n", | |
3467 | size + overall_size); | |
3468 | } | |
5af56ae8 JH |
3469 | else if (dump_file && (dump_flags & TDF_DETAILS)) |
3470 | fprintf (dump_file, " Not cloning for all contexts because " | |
3471 | "!good_cloning_opportunity_p.\n"); | |
155c9907 | 3472 | |
ca30a539 JH |
3473 | } |
3474 | ||
155c9907 | 3475 | for (i = 0; i < count; i++) |
ca30a539 | 3476 | { |
99b1c316 | 3477 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
c0cb5055 MJ |
3478 | ipcp_lattice<tree> *lat = &plats->itself; |
3479 | ipcp_value<tree> *val; | |
310bc633 MJ |
3480 | |
3481 | if (lat->bottom | |
3482 | || !lat->values | |
44210a96 | 3483 | || known_csts[i]) |
310bc633 MJ |
3484 | continue; |
3485 | ||
3486 | for (val = lat->values; val; val = val->next) | |
3487 | { | |
44210a96 MJ |
3488 | gcc_checking_assert (TREE_CODE (val->value) != TREE_BINFO); |
3489 | known_csts[i] = val->value; | |
310bc633 | 3490 | |
44210a96 MJ |
3491 | int emc = estimate_move_cost (TREE_TYPE (val->value), true); |
3492 | perform_estimation_of_a_value (node, known_csts, known_contexts, | |
eb270950 | 3493 | known_aggs, |
c0cb5055 | 3494 | removable_params_cost, emc, val); |
0318fc77 | 3495 | |
310bc633 MJ |
3496 | if (dump_file && (dump_flags & TDF_DETAILS)) |
3497 | { | |
3498 | fprintf (dump_file, " - estimates for value "); | |
3499 | print_ipcp_constant_value (dump_file, val->value); | |
0e8853ee JH |
3500 | fprintf (dump_file, " for "); |
3501 | ipa_dump_param (dump_file, info, i); | |
310bc633 | 3502 | fprintf (dump_file, ": time_benefit: %i, size: %i\n", |
c0cb5055 | 3503 | val->local_time_benefit, val->local_size_cost); |
310bc633 | 3504 | } |
310bc633 | 3505 | } |
9771b263 | 3506 | known_csts[i] = NULL_TREE; |
2c9561b5 MJ |
3507 | } |
3508 | ||
44210a96 MJ |
3509 | for (i = 0; i < count; i++) |
3510 | { | |
99b1c316 | 3511 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
44210a96 MJ |
3512 | |
3513 | if (!plats->virt_call) | |
3514 | continue; | |
3515 | ||
3516 | ipcp_lattice<ipa_polymorphic_call_context> *ctxlat = &plats->ctxlat; | |
3517 | ipcp_value<ipa_polymorphic_call_context> *val; | |
3518 | ||
3519 | if (ctxlat->bottom | |
3520 | || !ctxlat->values | |
3521 | || !known_contexts[i].useless_p ()) | |
3522 | continue; | |
3523 | ||
3524 | for (val = ctxlat->values; val; val = val->next) | |
3525 | { | |
3526 | known_contexts[i] = val->value; | |
3527 | perform_estimation_of_a_value (node, known_csts, known_contexts, | |
eb270950 | 3528 | known_aggs, |
44210a96 MJ |
3529 | removable_params_cost, 0, val); |
3530 | ||
3531 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3532 | { | |
3533 | fprintf (dump_file, " - estimates for polymorphic context "); | |
3534 | print_ipcp_constant_value (dump_file, val->value); | |
3535 | fprintf (dump_file, " for "); | |
3536 | ipa_dump_param (dump_file, info, i); | |
3537 | fprintf (dump_file, ": time_benefit: %i, size: %i\n", | |
3538 | val->local_time_benefit, val->local_size_cost); | |
3539 | } | |
3540 | } | |
3541 | known_contexts[i] = ipa_polymorphic_call_context (); | |
3542 | } | |
3543 | ||
155c9907 | 3544 | for (i = 0; i < count; i++) |
2c9561b5 | 3545 | { |
99b1c316 | 3546 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
eb270950 | 3547 | struct ipa_agg_value_set *agg; |
2c9561b5 MJ |
3548 | struct ipcp_agg_lattice *aglat; |
3549 | ||
3550 | if (plats->aggs_bottom || !plats->aggs) | |
3551 | continue; | |
3552 | ||
eb270950 | 3553 | agg = &known_aggs[i]; |
2c9561b5 MJ |
3554 | for (aglat = plats->aggs; aglat; aglat = aglat->next) |
3555 | { | |
c0cb5055 | 3556 | ipcp_value<tree> *val; |
2c9561b5 MJ |
3557 | if (aglat->bottom || !aglat->values |
3558 | /* If the following is true, the one value is in known_aggs. */ | |
3559 | || (!plats->aggs_contain_variable | |
c0cb5055 | 3560 | && aglat->is_single_const ())) |
2c9561b5 MJ |
3561 | continue; |
3562 | ||
3563 | for (val = aglat->values; val; val = val->next) | |
3564 | { | |
eb270950 | 3565 | struct ipa_agg_value item; |
2c9561b5 MJ |
3566 | |
3567 | item.offset = aglat->offset; | |
3568 | item.value = val->value; | |
eb270950 | 3569 | agg->items.safe_push (item); |
2c9561b5 | 3570 | |
44210a96 | 3571 | perform_estimation_of_a_value (node, known_csts, known_contexts, |
eb270950 | 3572 | known_aggs, |
c0cb5055 | 3573 | removable_params_cost, 0, val); |
2c9561b5 MJ |
3574 | |
3575 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3576 | { | |
3577 | fprintf (dump_file, " - estimates for value "); | |
3578 | print_ipcp_constant_value (dump_file, val->value); | |
0e8853ee | 3579 | fprintf (dump_file, " for "); |
155c9907 | 3580 | ipa_dump_param (dump_file, info, i); |
2c9561b5 | 3581 | fprintf (dump_file, "[%soffset: " HOST_WIDE_INT_PRINT_DEC |
c0cb5055 MJ |
3582 | "]: time_benefit: %i, size: %i\n", |
3583 | plats->aggs_by_ref ? "ref " : "", | |
3584 | aglat->offset, | |
3585 | val->local_time_benefit, val->local_size_cost); | |
2c9561b5 MJ |
3586 | } |
3587 | ||
eb270950 | 3588 | agg->items.pop (); |
2c9561b5 MJ |
3589 | } |
3590 | } | |
3591 | } | |
3592 | ||
9771b263 | 3593 | known_csts.release (); |
44210a96 | 3594 | known_contexts.release (); |
eb270950 | 3595 | ipa_release_agg_values (known_aggs); |
310bc633 MJ |
3596 | } |
3597 | ||
3598 | ||
3599 | /* Add value CUR_VAL and all yet-unsorted values it is dependent on to the | |
3600 | topological sort of values. */ | |
3601 | ||
c0cb5055 MJ |
3602 | template <typename valtype> |
3603 | void | |
3604 | value_topo_info<valtype>::add_val (ipcp_value<valtype> *cur_val) | |
310bc633 | 3605 | { |
c0cb5055 | 3606 | ipcp_value_source<valtype> *src; |
310bc633 MJ |
3607 | |
3608 | if (cur_val->dfs) | |
3609 | return; | |
3610 | ||
3611 | dfs_counter++; | |
3612 | cur_val->dfs = dfs_counter; | |
3613 | cur_val->low_link = dfs_counter; | |
3614 | ||
3615 | cur_val->topo_next = stack; | |
3616 | stack = cur_val; | |
3617 | cur_val->on_stack = true; | |
3618 | ||
3619 | for (src = cur_val->sources; src; src = src->next) | |
3620 | if (src->val) | |
3621 | { | |
3622 | if (src->val->dfs == 0) | |
3623 | { | |
c0cb5055 | 3624 | add_val (src->val); |
310bc633 MJ |
3625 | if (src->val->low_link < cur_val->low_link) |
3626 | cur_val->low_link = src->val->low_link; | |
3627 | } | |
3628 | else if (src->val->on_stack | |
3629 | && src->val->dfs < cur_val->low_link) | |
3630 | cur_val->low_link = src->val->dfs; | |
3631 | } | |
3632 | ||
3633 | if (cur_val->dfs == cur_val->low_link) | |
ca30a539 | 3634 | { |
c0cb5055 | 3635 | ipcp_value<valtype> *v, *scc_list = NULL; |
310bc633 MJ |
3636 | |
3637 | do | |
3638 | { | |
3639 | v = stack; | |
3640 | stack = v->topo_next; | |
3641 | v->on_stack = false; | |
3642 | ||
3643 | v->scc_next = scc_list; | |
3644 | scc_list = v; | |
3645 | } | |
3646 | while (v != cur_val); | |
3647 | ||
3648 | cur_val->topo_next = values_topo; | |
3649 | values_topo = cur_val; | |
ca30a539 | 3650 | } |
518dc859 RL |
3651 | } |
3652 | ||
310bc633 MJ |
3653 | /* Add all values in lattices associated with NODE to the topological sort if |
3654 | they are not there yet. */ | |
3655 | ||
3656 | static void | |
c0cb5055 | 3657 | add_all_node_vals_to_toposort (cgraph_node *node, ipa_topo_info *topo) |
518dc859 | 3658 | { |
99b1c316 | 3659 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 MJ |
3660 | int i, count = ipa_get_param_count (info); |
3661 | ||
155c9907 | 3662 | for (i = 0; i < count; i++) |
310bc633 | 3663 | { |
99b1c316 | 3664 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
c0cb5055 | 3665 | ipcp_lattice<tree> *lat = &plats->itself; |
2c9561b5 | 3666 | struct ipcp_agg_lattice *aglat; |
310bc633 | 3667 | |
2c9561b5 | 3668 | if (!lat->bottom) |
44210a96 MJ |
3669 | { |
3670 | ipcp_value<tree> *val; | |
3671 | for (val = lat->values; val; val = val->next) | |
3672 | topo->constants.add_val (val); | |
3673 | } | |
2c9561b5 MJ |
3674 | |
3675 | if (!plats->aggs_bottom) | |
3676 | for (aglat = plats->aggs; aglat; aglat = aglat->next) | |
3677 | if (!aglat->bottom) | |
44210a96 MJ |
3678 | { |
3679 | ipcp_value<tree> *val; | |
3680 | for (val = aglat->values; val; val = val->next) | |
3681 | topo->constants.add_val (val); | |
3682 | } | |
3683 | ||
3684 | ipcp_lattice<ipa_polymorphic_call_context> *ctxlat = &plats->ctxlat; | |
3685 | if (!ctxlat->bottom) | |
3686 | { | |
3687 | ipcp_value<ipa_polymorphic_call_context> *ctxval; | |
3688 | for (ctxval = ctxlat->values; ctxval; ctxval = ctxval->next) | |
3689 | topo->contexts.add_val (ctxval); | |
3690 | } | |
310bc633 | 3691 | } |
518dc859 RL |
3692 | } |
3693 | ||
310bc633 MJ |
3694 | /* One pass of constants propagation along the call graph edges, from callers |
3695 | to callees (requires topological ordering in TOPO), iterate over strongly | |
3696 | connected components. */ | |
3697 | ||
518dc859 | 3698 | static void |
99b1c316 | 3699 | propagate_constants_topo (class ipa_topo_info *topo) |
518dc859 | 3700 | { |
310bc633 | 3701 | int i; |
518dc859 | 3702 | |
310bc633 | 3703 | for (i = topo->nnodes - 1; i >= 0; i--) |
518dc859 | 3704 | { |
39e87baf | 3705 | unsigned j; |
310bc633 | 3706 | struct cgraph_node *v, *node = topo->order[i]; |
d52f5295 | 3707 | vec<cgraph_node *> cycle_nodes = ipa_get_nodes_in_cycle (node); |
310bc633 | 3708 | |
310bc633 MJ |
3709 | /* First, iteratively propagate within the strongly connected component |
3710 | until all lattices stabilize. */ | |
39e87baf | 3711 | FOR_EACH_VEC_ELT (cycle_nodes, j, v) |
d52f5295 | 3712 | if (v->has_gimple_body_p ()) |
6cf67b62 | 3713 | { |
e72763e2 JH |
3714 | if (opt_for_fn (v->decl, flag_ipa_cp) |
3715 | && opt_for_fn (v->decl, optimize)) | |
6cf67b62 | 3716 | push_node_to_stack (topo, v); |
223f4b10 | 3717 | /* When V is not optimized, we can not push it to stack, but |
6cf67b62 JH |
3718 | still we need to set all its callees lattices to bottom. */ |
3719 | else | |
3720 | { | |
3721 | for (cgraph_edge *cs = v->callees; cs; cs = cs->next_callee) | |
3722 | propagate_constants_across_call (cs); | |
3723 | } | |
3724 | } | |
310bc633 | 3725 | |
39e87baf | 3726 | v = pop_node_from_stack (topo); |
310bc633 MJ |
3727 | while (v) |
3728 | { | |
3729 | struct cgraph_edge *cs; | |
9b14fc33 FX |
3730 | class ipa_node_params *info = NULL; |
3731 | bool self_scc = true; | |
310bc633 MJ |
3732 | |
3733 | for (cs = v->callees; cs; cs = cs->next_callee) | |
af21714c MJ |
3734 | if (ipa_edge_within_scc (cs)) |
3735 | { | |
9b14fc33 FX |
3736 | cgraph_node *callee = cs->callee->function_symbol (); |
3737 | ||
3738 | if (v != callee) | |
3739 | self_scc = false; | |
3740 | ||
3741 | if (!info) | |
3742 | { | |
3743 | info = IPA_NODE_REF (v); | |
3744 | info->node_within_scc = true; | |
3745 | } | |
3746 | ||
155c9907 | 3747 | if (propagate_constants_across_call (cs)) |
9b14fc33 | 3748 | push_node_to_stack (topo, callee); |
af21714c | 3749 | } |
9b14fc33 FX |
3750 | |
3751 | if (info) | |
3752 | info->node_is_self_scc = self_scc; | |
3753 | ||
310bc633 MJ |
3754 | v = pop_node_from_stack (topo); |
3755 | } | |
3756 | ||
3757 | /* Afterwards, propagate along edges leading out of the SCC, calculates | |
3758 | the local effects of the discovered constants and all valid values to | |
3759 | their topological sort. */ | |
39e87baf | 3760 | FOR_EACH_VEC_ELT (cycle_nodes, j, v) |
6cf67b62 | 3761 | if (v->has_gimple_body_p () |
e72763e2 JH |
3762 | && opt_for_fn (v->decl, flag_ipa_cp) |
3763 | && opt_for_fn (v->decl, optimize)) | |
39e87baf MJ |
3764 | { |
3765 | struct cgraph_edge *cs; | |
310bc633 | 3766 | |
39e87baf | 3767 | estimate_local_effects (v); |
c0cb5055 | 3768 | add_all_node_vals_to_toposort (v, topo); |
39e87baf | 3769 | for (cs = v->callees; cs; cs = cs->next_callee) |
4cb13597 | 3770 | if (!ipa_edge_within_scc (cs)) |
155c9907 | 3771 | propagate_constants_across_call (cs); |
39e87baf MJ |
3772 | } |
3773 | cycle_nodes.release (); | |
518dc859 RL |
3774 | } |
3775 | } | |
3776 | ||
df0227c4 MJ |
3777 | |
3778 | /* Return the sum of A and B if none of them is bigger than INT_MAX/2, return | |
3779 | the bigger one if otherwise. */ | |
3780 | ||
3781 | static int | |
3782 | safe_add (int a, int b) | |
3783 | { | |
3784 | if (a > INT_MAX/2 || b > INT_MAX/2) | |
3785 | return a > b ? a : b; | |
3786 | else | |
3787 | return a + b; | |
3788 | } | |
3789 | ||
3790 | ||
310bc633 | 3791 | /* Propagate the estimated effects of individual values along the topological |
073a8998 | 3792 | from the dependent values to those they depend on. */ |
310bc633 | 3793 | |
c0cb5055 MJ |
3794 | template <typename valtype> |
3795 | void | |
3796 | value_topo_info<valtype>::propagate_effects () | |
518dc859 | 3797 | { |
c0cb5055 | 3798 | ipcp_value<valtype> *base; |
518dc859 | 3799 | |
310bc633 | 3800 | for (base = values_topo; base; base = base->topo_next) |
518dc859 | 3801 | { |
c0cb5055 MJ |
3802 | ipcp_value_source<valtype> *src; |
3803 | ipcp_value<valtype> *val; | |
310bc633 MJ |
3804 | int time = 0, size = 0; |
3805 | ||
3806 | for (val = base; val; val = val->scc_next) | |
3807 | { | |
df0227c4 MJ |
3808 | time = safe_add (time, |
3809 | val->local_time_benefit + val->prop_time_benefit); | |
3810 | size = safe_add (size, val->local_size_cost + val->prop_size_cost); | |
310bc633 MJ |
3811 | } |
3812 | ||
3813 | for (val = base; val; val = val->scc_next) | |
3814 | for (src = val->sources; src; src = src->next) | |
3815 | if (src->val | |
3dafb85c | 3816 | && src->cs->maybe_hot_p ()) |
310bc633 | 3817 | { |
df0227c4 MJ |
3818 | src->val->prop_time_benefit = safe_add (time, |
3819 | src->val->prop_time_benefit); | |
3820 | src->val->prop_size_cost = safe_add (size, | |
3821 | src->val->prop_size_cost); | |
310bc633 | 3822 | } |
518dc859 RL |
3823 | } |
3824 | } | |
3825 | ||
310bc633 | 3826 | |
44210a96 MJ |
3827 | /* Propagate constants, polymorphic contexts and their effects from the |
3828 | summaries interprocedurally. */ | |
310bc633 | 3829 | |
518dc859 | 3830 | static void |
99b1c316 | 3831 | ipcp_propagate_stage (class ipa_topo_info *topo) |
518dc859 RL |
3832 | { |
3833 | struct cgraph_node *node; | |
518dc859 | 3834 | |
310bc633 MJ |
3835 | if (dump_file) |
3836 | fprintf (dump_file, "\n Propagating constants:\n\n"); | |
3837 | ||
e7a74006 JH |
3838 | max_count = profile_count::uninitialized (); |
3839 | ||
310bc633 MJ |
3840 | FOR_EACH_DEFINED_FUNCTION (node) |
3841 | { | |
e72763e2 JH |
3842 | if (node->has_gimple_body_p () |
3843 | && opt_for_fn (node->decl, flag_ipa_cp) | |
3844 | && opt_for_fn (node->decl, optimize)) | |
310bc633 | 3845 | { |
6cf67b62 JH |
3846 | class ipa_node_params *info = IPA_NODE_REF (node); |
3847 | determine_versionability (node, info); | |
99b1c316 | 3848 | info->lattices = XCNEWVEC (class ipcp_param_lattices, |
310bc633 MJ |
3849 | ipa_get_param_count (info)); |
3850 | initialize_node_lattices (node); | |
3851 | } | |
f658ad30 | 3852 | ipa_size_summary *s = ipa_size_summaries->get (node); |
56f62793 ML |
3853 | if (node->definition && !node->alias && s != NULL) |
3854 | overall_size += s->self_size; | |
1bad9c18 | 3855 | max_count = max_count.max (node->count.ipa ()); |
310bc633 MJ |
3856 | } |
3857 | ||
3858 | max_new_size = overall_size; | |
028d4092 ML |
3859 | if (max_new_size < param_large_unit_insns) |
3860 | max_new_size = param_large_unit_insns; | |
3861 | max_new_size += max_new_size * param_ipcp_unit_growth / 100 + 1; | |
310bc633 MJ |
3862 | |
3863 | if (dump_file) | |
3864 | fprintf (dump_file, "\noverall_size: %li, max_new_size: %li\n", | |
3865 | overall_size, max_new_size); | |
3866 | ||
3867 | propagate_constants_topo (topo); | |
b2b29377 MM |
3868 | if (flag_checking) |
3869 | ipcp_verify_propagated_values (); | |
c0cb5055 | 3870 | topo->constants.propagate_effects (); |
44210a96 | 3871 | topo->contexts.propagate_effects (); |
310bc633 MJ |
3872 | |
3873 | if (dump_file) | |
3874 | { | |
3875 | fprintf (dump_file, "\nIPA lattices after all propagation:\n"); | |
3876 | print_all_lattices (dump_file, (dump_flags & TDF_DETAILS), true); | |
3877 | } | |
3878 | } | |
3879 | ||
3880 | /* Discover newly direct outgoing edges from NODE which is a new clone with | |
44210a96 | 3881 | known KNOWN_CSTS and make them direct. */ |
310bc633 MJ |
3882 | |
3883 | static void | |
3884 | ipcp_discover_new_direct_edges (struct cgraph_node *node, | |
44210a96 MJ |
3885 | vec<tree> known_csts, |
3886 | vec<ipa_polymorphic_call_context> | |
3887 | known_contexts, | |
162712de | 3888 | struct ipa_agg_replacement_value *aggvals) |
310bc633 MJ |
3889 | { |
3890 | struct cgraph_edge *ie, *next_ie; | |
0f378cb5 | 3891 | bool found = false; |
310bc633 MJ |
3892 | |
3893 | for (ie = node->indirect_calls; ie; ie = next_ie) | |
3894 | { | |
81fa35bd | 3895 | tree target; |
231b4916 | 3896 | bool speculative; |
310bc633 MJ |
3897 | |
3898 | next_ie = ie->next_callee; | |
44210a96 | 3899 | target = ipa_get_indirect_edge_target_1 (ie, known_csts, known_contexts, |
231b4916 | 3900 | vNULL, aggvals, &speculative); |
310bc633 | 3901 | if (target) |
0f378cb5 | 3902 | { |
042ae7d2 JH |
3903 | bool agg_contents = ie->indirect_info->agg_contents; |
3904 | bool polymorphic = ie->indirect_info->polymorphic; | |
a4e33812 | 3905 | int param_index = ie->indirect_info->param_index; |
231b4916 JH |
3906 | struct cgraph_edge *cs = ipa_make_edge_direct_to_target (ie, target, |
3907 | speculative); | |
0f378cb5 | 3908 | found = true; |
4502fe8d | 3909 | |
042ae7d2 | 3910 | if (cs && !agg_contents && !polymorphic) |
4502fe8d | 3911 | { |
99b1c316 | 3912 | class ipa_node_params *info = IPA_NODE_REF (node); |
4502fe8d MJ |
3913 | int c = ipa_get_controlled_uses (info, param_index); |
3914 | if (c != IPA_UNDESCRIBED_USE) | |
3915 | { | |
3916 | struct ipa_ref *to_del; | |
3917 | ||
3918 | c--; | |
3919 | ipa_set_controlled_uses (info, param_index, c); | |
3920 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3921 | fprintf (dump_file, " controlled uses count of param " | |
3922 | "%i bumped down to %i\n", param_index, c); | |
3923 | if (c == 0 | |
d122681a | 3924 | && (to_del = node->find_reference (cs->callee, NULL, 0))) |
4502fe8d MJ |
3925 | { |
3926 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
3927 | fprintf (dump_file, " and even removing its " | |
3928 | "cloning-created reference\n"); | |
d122681a | 3929 | to_del->remove_reference (); |
4502fe8d MJ |
3930 | } |
3931 | } | |
3932 | } | |
0f378cb5 | 3933 | } |
310bc633 | 3934 | } |
0f378cb5 JH |
3935 | /* Turning calls to direct calls will improve overall summary. */ |
3936 | if (found) | |
0bceb671 | 3937 | ipa_update_overall_fn_summary (node); |
310bc633 MJ |
3938 | } |
3939 | ||
1ac2bdb4 ML |
3940 | class edge_clone_summary; |
3941 | static call_summary <edge_clone_summary *> *edge_clone_summaries = NULL; | |
310bc633 | 3942 | |
1ac2bdb4 | 3943 | /* Edge clone summary. */ |
310bc633 | 3944 | |
6c1dae73 | 3945 | class edge_clone_summary |
310bc633 | 3946 | { |
6c1dae73 | 3947 | public: |
1ac2bdb4 ML |
3948 | /* Default constructor. */ |
3949 | edge_clone_summary (): prev_clone (NULL), next_clone (NULL) {} | |
aef83682 | 3950 | |
1ac2bdb4 ML |
3951 | /* Default destructor. */ |
3952 | ~edge_clone_summary () | |
3953 | { | |
3954 | if (prev_clone) | |
3955 | edge_clone_summaries->get (prev_clone)->next_clone = next_clone; | |
3956 | if (next_clone) | |
3957 | edge_clone_summaries->get (next_clone)->prev_clone = prev_clone; | |
3958 | } | |
310bc633 | 3959 | |
1ac2bdb4 ML |
3960 | cgraph_edge *prev_clone; |
3961 | cgraph_edge *next_clone; | |
3962 | }; | |
aef83682 | 3963 | |
1ac2bdb4 ML |
3964 | class edge_clone_summary_t: |
3965 | public call_summary <edge_clone_summary *> | |
aef83682 | 3966 | { |
1ac2bdb4 ML |
3967 | public: |
3968 | edge_clone_summary_t (symbol_table *symtab): | |
3969 | call_summary <edge_clone_summary *> (symtab) | |
3970 | { | |
3971 | m_initialize_when_cloning = true; | |
3972 | } | |
aef83682 | 3973 | |
1ac2bdb4 ML |
3974 | virtual void duplicate (cgraph_edge *src_edge, cgraph_edge *dst_edge, |
3975 | edge_clone_summary *src_data, | |
3976 | edge_clone_summary *dst_data); | |
3977 | }; | |
3978 | ||
3979 | /* Edge duplication hook. */ | |
3980 | ||
3981 | void | |
3982 | edge_clone_summary_t::duplicate (cgraph_edge *src_edge, cgraph_edge *dst_edge, | |
3983 | edge_clone_summary *src_data, | |
3984 | edge_clone_summary *dst_data) | |
3985 | { | |
3986 | if (src_data->next_clone) | |
3987 | edge_clone_summaries->get (src_data->next_clone)->prev_clone = dst_edge; | |
3988 | dst_data->prev_clone = src_edge; | |
3989 | dst_data->next_clone = src_data->next_clone; | |
3990 | src_data->next_clone = dst_edge; | |
aef83682 MJ |
3991 | } |
3992 | ||
47f4756e | 3993 | /* Return true is NODE is DEST or its clone for all contexts. */ |
310bc633 MJ |
3994 | |
3995 | static bool | |
47f4756e MJ |
3996 | same_node_or_its_all_contexts_clone_p (cgraph_node *node, cgraph_node *dest) |
3997 | { | |
3998 | if (node == dest) | |
3999 | return true; | |
4000 | ||
99b1c316 | 4001 | class ipa_node_params *info = IPA_NODE_REF (node); |
47f4756e MJ |
4002 | return info->is_all_contexts_clone && info->ipcp_orig_node == dest; |
4003 | } | |
4004 | ||
7b668576 MJ |
4005 | /* Return true if edge CS does bring about the value described by SRC to |
4006 | DEST_VAL of node DEST or its clone for all contexts. */ | |
47f4756e MJ |
4007 | |
4008 | static bool | |
4009 | cgraph_edge_brings_value_p (cgraph_edge *cs, ipcp_value_source<tree> *src, | |
7b668576 | 4010 | cgraph_node *dest, ipcp_value<tree> *dest_val) |
310bc633 | 4011 | { |
99b1c316 | 4012 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
47f4756e MJ |
4013 | enum availability availability; |
4014 | cgraph_node *real_dest = cs->callee->function_symbol (&availability); | |
310bc633 | 4015 | |
6cf67b62 JH |
4016 | if (availability <= AVAIL_INTERPOSABLE |
4017 | || !same_node_or_its_all_contexts_clone_p (real_dest, dest) | |
310bc633 MJ |
4018 | || caller_info->node_dead) |
4019 | return false; | |
2f1f3ac4 MJ |
4020 | |
4021 | if (!src->val) | |
310bc633 MJ |
4022 | return true; |
4023 | ||
4024 | if (caller_info->ipcp_orig_node) | |
4025 | { | |
2c9561b5 MJ |
4026 | tree t; |
4027 | if (src->offset == -1) | |
44210a96 | 4028 | t = caller_info->known_csts[src->index]; |
2c9561b5 MJ |
4029 | else |
4030 | t = get_clone_agg_value (cs->caller, src->offset, src->index); | |
310bc633 MJ |
4031 | return (t != NULL_TREE |
4032 | && values_equal_for_ipcp_p (src->val->value, t)); | |
4033 | } | |
4034 | else | |
518dc859 | 4035 | { |
2f1f3ac4 MJ |
4036 | /* At the moment we do not propagate over arithmetic jump functions in |
4037 | SCCs, so it is safe to detect self-feeding recursive calls in this | |
4038 | way. */ | |
4039 | if (src->val == dest_val) | |
4040 | return true; | |
4041 | ||
2c9561b5 | 4042 | struct ipcp_agg_lattice *aglat; |
99b1c316 | 4043 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (caller_info, |
2c9561b5 MJ |
4044 | src->index); |
4045 | if (src->offset == -1) | |
c0cb5055 | 4046 | return (plats->itself.is_single_const () |
2c9561b5 MJ |
4047 | && values_equal_for_ipcp_p (src->val->value, |
4048 | plats->itself.values->value)); | |
310bc633 | 4049 | else |
2c9561b5 MJ |
4050 | { |
4051 | if (plats->aggs_bottom || plats->aggs_contain_variable) | |
4052 | return false; | |
4053 | for (aglat = plats->aggs; aglat; aglat = aglat->next) | |
4054 | if (aglat->offset == src->offset) | |
c0cb5055 | 4055 | return (aglat->is_single_const () |
2c9561b5 MJ |
4056 | && values_equal_for_ipcp_p (src->val->value, |
4057 | aglat->values->value)); | |
4058 | } | |
4059 | return false; | |
310bc633 MJ |
4060 | } |
4061 | } | |
4062 | ||
7b668576 MJ |
4063 | /* Return true if edge CS does bring about the value described by SRC to |
4064 | DST_VAL of node DEST or its clone for all contexts. */ | |
44210a96 MJ |
4065 | |
4066 | static bool | |
47f4756e MJ |
4067 | cgraph_edge_brings_value_p (cgraph_edge *cs, |
4068 | ipcp_value_source<ipa_polymorphic_call_context> *src, | |
7b668576 MJ |
4069 | cgraph_node *dest, |
4070 | ipcp_value<ipa_polymorphic_call_context> *) | |
44210a96 | 4071 | { |
99b1c316 | 4072 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
6cf67b62 JH |
4073 | enum availability avail; |
4074 | cgraph_node *real_dest = cs->callee->function_symbol (&avail); | |
44210a96 | 4075 | |
6cf67b62 JH |
4076 | if (avail <= AVAIL_INTERPOSABLE |
4077 | || !same_node_or_its_all_contexts_clone_p (real_dest, dest) | |
44210a96 MJ |
4078 | || caller_info->node_dead) |
4079 | return false; | |
4080 | if (!src->val) | |
4081 | return true; | |
4082 | ||
4083 | if (caller_info->ipcp_orig_node) | |
4084 | return (caller_info->known_contexts.length () > (unsigned) src->index) | |
4085 | && values_equal_for_ipcp_p (src->val->value, | |
4086 | caller_info->known_contexts[src->index]); | |
4087 | ||
99b1c316 | 4088 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (caller_info, |
44210a96 MJ |
4089 | src->index); |
4090 | return plats->ctxlat.is_single_const () | |
4091 | && values_equal_for_ipcp_p (src->val->value, | |
4092 | plats->ctxlat.values->value); | |
4093 | } | |
4094 | ||
2c9561b5 MJ |
4095 | /* Get the next clone in the linked list of clones of an edge. */ |
4096 | ||
4097 | static inline struct cgraph_edge * | |
4098 | get_next_cgraph_edge_clone (struct cgraph_edge *cs) | |
4099 | { | |
1ac2bdb4 ML |
4100 | edge_clone_summary *s = edge_clone_summaries->get (cs); |
4101 | return s != NULL ? s->next_clone : NULL; | |
2c9561b5 MJ |
4102 | } |
4103 | ||
7b668576 MJ |
4104 | /* Given VAL that is intended for DEST, iterate over all its sources and if any |
4105 | of them is viable and hot, return true. In that case, for those that still | |
4106 | hold, add their edge frequency and their number into *FREQUENCY and | |
4107 | *CALLER_COUNT respectively. */ | |
310bc633 | 4108 | |
c0cb5055 | 4109 | template <typename valtype> |
310bc633 | 4110 | static bool |
47f4756e MJ |
4111 | get_info_about_necessary_edges (ipcp_value<valtype> *val, cgraph_node *dest, |
4112 | int *freq_sum, | |
3995f3a2 | 4113 | profile_count *count_sum, int *caller_count) |
310bc633 | 4114 | { |
c0cb5055 | 4115 | ipcp_value_source<valtype> *src; |
310bc633 | 4116 | int freq = 0, count = 0; |
3995f3a2 | 4117 | profile_count cnt = profile_count::zero (); |
310bc633 | 4118 | bool hot = false; |
7b668576 | 4119 | bool non_self_recursive = false; |
310bc633 MJ |
4120 | |
4121 | for (src = val->sources; src; src = src->next) | |
4122 | { | |
4123 | struct cgraph_edge *cs = src->cs; | |
4124 | while (cs) | |
518dc859 | 4125 | { |
7b668576 | 4126 | if (cgraph_edge_brings_value_p (cs, src, dest, val)) |
310bc633 MJ |
4127 | { |
4128 | count++; | |
1bad9c18 JH |
4129 | freq += cs->frequency (); |
4130 | if (cs->count.ipa ().initialized_p ()) | |
4131 | cnt += cs->count.ipa (); | |
3dafb85c | 4132 | hot |= cs->maybe_hot_p (); |
7b668576 MJ |
4133 | if (cs->caller != dest) |
4134 | non_self_recursive = true; | |
9b14fc33 FX |
4135 | else if (src->val) |
4136 | gcc_assert (values_equal_for_ipcp_p (src->val->value, | |
4137 | val->value)); | |
310bc633 MJ |
4138 | } |
4139 | cs = get_next_cgraph_edge_clone (cs); | |
518dc859 RL |
4140 | } |
4141 | } | |
310bc633 | 4142 | |
7b668576 MJ |
4143 | /* If the only edges bringing a value are self-recursive ones, do not bother |
4144 | evaluating it. */ | |
4145 | if (!non_self_recursive) | |
4146 | return false; | |
4147 | ||
310bc633 MJ |
4148 | *freq_sum = freq; |
4149 | *count_sum = cnt; | |
4150 | *caller_count = count; | |
9b14fc33 FX |
4151 | |
4152 | if (!hot && IPA_NODE_REF (dest)->node_within_scc) | |
4153 | { | |
4154 | struct cgraph_edge *cs; | |
4155 | ||
4156 | /* Cold non-SCC source edge could trigger hot recursive execution of | |
4157 | function. Consider the case as hot and rely on following cost model | |
4158 | computation to further select right one. */ | |
4159 | for (cs = dest->callers; cs; cs = cs->next_caller) | |
4160 | if (cs->caller == dest && cs->maybe_hot_p ()) | |
4161 | return true; | |
4162 | } | |
4163 | ||
310bc633 | 4164 | return hot; |
518dc859 RL |
4165 | } |
4166 | ||
47f4756e MJ |
4167 | /* Return a vector of incoming edges that do bring value VAL to node DEST. It |
4168 | is assumed their number is known and equal to CALLER_COUNT. */ | |
310bc633 | 4169 | |
c0cb5055 | 4170 | template <typename valtype> |
d52f5295 | 4171 | static vec<cgraph_edge *> |
47f4756e MJ |
4172 | gather_edges_for_value (ipcp_value<valtype> *val, cgraph_node *dest, |
4173 | int caller_count) | |
518dc859 | 4174 | { |
c0cb5055 | 4175 | ipcp_value_source<valtype> *src; |
d52f5295 | 4176 | vec<cgraph_edge *> ret; |
310bc633 | 4177 | |
9771b263 | 4178 | ret.create (caller_count); |
310bc633 MJ |
4179 | for (src = val->sources; src; src = src->next) |
4180 | { | |
4181 | struct cgraph_edge *cs = src->cs; | |
4182 | while (cs) | |
4183 | { | |
7b668576 | 4184 | if (cgraph_edge_brings_value_p (cs, src, dest, val)) |
9771b263 | 4185 | ret.quick_push (cs); |
310bc633 MJ |
4186 | cs = get_next_cgraph_edge_clone (cs); |
4187 | } | |
4188 | } | |
4189 | ||
4190 | return ret; | |
518dc859 RL |
4191 | } |
4192 | ||
310bc633 MJ |
4193 | /* Construct a replacement map for a know VALUE for a formal parameter PARAM. |
4194 | Return it or NULL if for some reason it cannot be created. */ | |
4195 | ||
518dc859 | 4196 | static struct ipa_replace_map * |
99b1c316 | 4197 | get_replacement_map (class ipa_node_params *info, tree value, int parm_num) |
518dc859 RL |
4198 | { |
4199 | struct ipa_replace_map *replace_map; | |
518dc859 | 4200 | |
310bc633 | 4201 | |
766090c2 | 4202 | replace_map = ggc_alloc<ipa_replace_map> (); |
c6f7cfc1 JH |
4203 | if (dump_file) |
4204 | { | |
0e8853ee JH |
4205 | fprintf (dump_file, " replacing "); |
4206 | ipa_dump_param (dump_file, info, parm_num); | |
155c9907 | 4207 | |
c6f7cfc1 | 4208 | fprintf (dump_file, " with const "); |
ef6cb4c7 | 4209 | print_generic_expr (dump_file, value); |
c6f7cfc1 JH |
4210 | fprintf (dump_file, "\n"); |
4211 | } | |
49bde175 | 4212 | replace_map->parm_num = parm_num; |
310bc633 | 4213 | replace_map->new_tree = value; |
518dc859 RL |
4214 | return replace_map; |
4215 | } | |
4216 | ||
310bc633 | 4217 | /* Dump new profiling counts */ |
518dc859 | 4218 | |
518dc859 | 4219 | static void |
310bc633 MJ |
4220 | dump_profile_updates (struct cgraph_node *orig_node, |
4221 | struct cgraph_node *new_node) | |
518dc859 | 4222 | { |
310bc633 | 4223 | struct cgraph_edge *cs; |
518dc859 | 4224 | |
3995f3a2 JH |
4225 | fprintf (dump_file, " setting count of the specialized node to "); |
4226 | new_node->count.dump (dump_file); | |
4227 | fprintf (dump_file, "\n"); | |
155c9907 | 4228 | for (cs = new_node->callees; cs; cs = cs->next_callee) |
3995f3a2 JH |
4229 | { |
4230 | fprintf (dump_file, " edge to %s has count ", | |
4231 | cs->callee->name ()); | |
4232 | cs->count.dump (dump_file); | |
4233 | fprintf (dump_file, "\n"); | |
4234 | } | |
310bc633 | 4235 | |
3995f3a2 JH |
4236 | fprintf (dump_file, " setting count of the original node to "); |
4237 | orig_node->count.dump (dump_file); | |
4238 | fprintf (dump_file, "\n"); | |
155c9907 | 4239 | for (cs = orig_node->callees; cs; cs = cs->next_callee) |
3995f3a2 JH |
4240 | { |
4241 | fprintf (dump_file, " edge to %s is left with ", | |
4242 | cs->callee->name ()); | |
4243 | cs->count.dump (dump_file); | |
4244 | fprintf (dump_file, "\n"); | |
4245 | } | |
310bc633 | 4246 | } |
c6f7cfc1 | 4247 | |
310bc633 MJ |
4248 | /* After a specialized NEW_NODE version of ORIG_NODE has been created, update |
4249 | their profile information to reflect this. */ | |
518dc859 | 4250 | |
518dc859 | 4251 | static void |
310bc633 MJ |
4252 | update_profiling_info (struct cgraph_node *orig_node, |
4253 | struct cgraph_node *new_node) | |
518dc859 | 4254 | { |
518dc859 | 4255 | struct cgraph_edge *cs; |
310bc633 | 4256 | struct caller_statistics stats; |
3995f3a2 JH |
4257 | profile_count new_sum, orig_sum; |
4258 | profile_count remainder, orig_node_count = orig_node->count; | |
2e7fd867 | 4259 | profile_count orig_new_node_count = new_node->count; |
310bc633 | 4260 | |
1bad9c18 | 4261 | if (!(orig_node_count.ipa () > profile_count::zero ())) |
310bc633 | 4262 | return; |
518dc859 | 4263 | |
310bc633 | 4264 | init_caller_stats (&stats); |
d52f5295 ML |
4265 | orig_node->call_for_symbol_thunks_and_aliases (gather_caller_stats, &stats, |
4266 | false); | |
310bc633 MJ |
4267 | orig_sum = stats.count_sum; |
4268 | init_caller_stats (&stats); | |
d52f5295 ML |
4269 | new_node->call_for_symbol_thunks_and_aliases (gather_caller_stats, &stats, |
4270 | false); | |
310bc633 MJ |
4271 | new_sum = stats.count_sum; |
4272 | ||
4273 | if (orig_node_count < orig_sum + new_sum) | |
518dc859 | 4274 | { |
310bc633 | 4275 | if (dump_file) |
3995f3a2 JH |
4276 | { |
4277 | fprintf (dump_file, " Problem: node %s has too low count ", | |
4278 | orig_node->dump_name ()); | |
4279 | orig_node_count.dump (dump_file); | |
4280 | fprintf (dump_file, "while the sum of incoming count is "); | |
4281 | (orig_sum + new_sum).dump (dump_file); | |
4282 | fprintf (dump_file, "\n"); | |
4283 | } | |
4284 | ||
4285 | orig_node_count = (orig_sum + new_sum).apply_scale (12, 10); | |
310bc633 | 4286 | if (dump_file) |
3995f3a2 JH |
4287 | { |
4288 | fprintf (dump_file, " proceeding by pretending it was "); | |
4289 | orig_node_count.dump (dump_file); | |
4290 | fprintf (dump_file, "\n"); | |
4291 | } | |
518dc859 | 4292 | } |
310bc633 | 4293 | |
517048ce JH |
4294 | remainder = orig_node_count.combine_with_ipa_count (orig_node_count.ipa () |
4295 | - new_sum.ipa ()); | |
34fbe3f0 JH |
4296 | |
4297 | /* With partial train run we do not want to assume that original's | |
4298 | count is zero whenever we redurect all executed edges to clone. | |
4299 | Simply drop profile to local one in this case. */ | |
4300 | if (remainder.ipa_p () && !remainder.ipa ().nonzero_p () | |
4301 | && orig_node->count.ipa_p () && orig_node->count.ipa ().nonzero_p () | |
4302 | && flag_profile_partial_training) | |
4303 | remainder = remainder.guessed_local (); | |
4304 | ||
517048ce | 4305 | new_sum = orig_node_count.combine_with_ipa_count (new_sum); |
2e7fd867 | 4306 | new_node->count = new_sum; |
310bc633 MJ |
4307 | orig_node->count = remainder; |
4308 | ||
2e7fd867 | 4309 | profile_count::adjust_for_ipa_scaling (&new_sum, &orig_new_node_count); |
155c9907 | 4310 | for (cs = new_node->callees; cs; cs = cs->next_callee) |
2e7fd867 JH |
4311 | cs->count = cs->count.apply_scale (new_sum, orig_new_node_count); |
4312 | for (cs = new_node->indirect_calls; cs; cs = cs->next_callee) | |
4313 | cs->count = cs->count.apply_scale (new_sum, orig_new_node_count); | |
310bc633 | 4314 | |
5a686851 | 4315 | profile_count::adjust_for_ipa_scaling (&remainder, &orig_node_count); |
155c9907 | 4316 | for (cs = orig_node->callees; cs; cs = cs->next_callee) |
3995f3a2 | 4317 | cs->count = cs->count.apply_scale (remainder, orig_node_count); |
2e7fd867 JH |
4318 | for (cs = orig_node->indirect_calls; cs; cs = cs->next_callee) |
4319 | cs->count = cs->count.apply_scale (remainder, orig_node_count); | |
310bc633 MJ |
4320 | |
4321 | if (dump_file) | |
4322 | dump_profile_updates (orig_node, new_node); | |
518dc859 RL |
4323 | } |
4324 | ||
310bc633 MJ |
4325 | /* Update the respective profile of specialized NEW_NODE and the original |
4326 | ORIG_NODE after additional edges with cumulative count sum REDIRECTED_SUM | |
4327 | have been redirected to the specialized version. */ | |
4328 | ||
4329 | static void | |
4330 | update_specialized_profile (struct cgraph_node *new_node, | |
4331 | struct cgraph_node *orig_node, | |
3995f3a2 | 4332 | profile_count redirected_sum) |
5e45130d | 4333 | { |
a065d52e | 4334 | struct cgraph_edge *cs; |
3995f3a2 | 4335 | profile_count new_node_count, orig_node_count = orig_node->count; |
5e45130d | 4336 | |
310bc633 | 4337 | if (dump_file) |
3995f3a2 JH |
4338 | { |
4339 | fprintf (dump_file, " the sum of counts of redirected edges is "); | |
4340 | redirected_sum.dump (dump_file); | |
4341 | fprintf (dump_file, "\n"); | |
4342 | } | |
4343 | if (!(orig_node_count > profile_count::zero ())) | |
310bc633 | 4344 | return; |
a065d52e | 4345 | |
310bc633 | 4346 | gcc_assert (orig_node_count >= redirected_sum); |
5e45130d | 4347 | |
310bc633 MJ |
4348 | new_node_count = new_node->count; |
4349 | new_node->count += redirected_sum; | |
4350 | orig_node->count -= redirected_sum; | |
a065d52e | 4351 | |
155c9907 | 4352 | for (cs = new_node->callees; cs; cs = cs->next_callee) |
e3951b03 | 4353 | cs->count += cs->count.apply_scale (redirected_sum, new_node_count); |
a065d52e | 4354 | |
155c9907 | 4355 | for (cs = orig_node->callees; cs; cs = cs->next_callee) |
310bc633 | 4356 | { |
3995f3a2 JH |
4357 | profile_count dec = cs->count.apply_scale (redirected_sum, |
4358 | orig_node_count); | |
4359 | cs->count -= dec; | |
310bc633 | 4360 | } |
a065d52e | 4361 | |
310bc633 MJ |
4362 | if (dump_file) |
4363 | dump_profile_updates (orig_node, new_node); | |
5e45130d JH |
4364 | } |
4365 | ||
ff6686d2 MJ |
4366 | /* Return true if we would like to remove a parameter from NODE when cloning it |
4367 | with KNOWN_CSTS scalar constants. */ | |
4368 | ||
4369 | static bool | |
4370 | want_remove_some_param_p (cgraph_node *node, vec<tree> known_csts) | |
4371 | { | |
4372 | auto_vec<bool, 16> surviving; | |
4373 | bool filled_vec = false; | |
4374 | ipa_node_params *info = IPA_NODE_REF (node); | |
4375 | int i, count = ipa_get_param_count (info); | |
4376 | ||
4377 | for (i = 0; i < count; i++) | |
4378 | { | |
4379 | if (!known_csts[i] && ipa_is_param_used (info, i)) | |
4380 | continue; | |
4381 | ||
4382 | if (!filled_vec) | |
4383 | { | |
4384 | if (!node->clone.param_adjustments) | |
4385 | return true; | |
4386 | node->clone.param_adjustments->get_surviving_params (&surviving); | |
4387 | filled_vec = true; | |
4388 | } | |
4389 | if (surviving.length() < (unsigned) i && surviving[i]) | |
4390 | return true; | |
4391 | } | |
4392 | return false; | |
4393 | } | |
4394 | ||
44210a96 MJ |
4395 | /* Create a specialized version of NODE with known constants in KNOWN_CSTS, |
4396 | known contexts in KNOWN_CONTEXTS and known aggregate values in AGGVALS and | |
4397 | redirect all edges in CALLERS to it. */ | |
a065d52e | 4398 | |
310bc633 MJ |
4399 | static struct cgraph_node * |
4400 | create_specialized_node (struct cgraph_node *node, | |
44210a96 MJ |
4401 | vec<tree> known_csts, |
4402 | vec<ipa_polymorphic_call_context> known_contexts, | |
2c9561b5 | 4403 | struct ipa_agg_replacement_value *aggvals, |
d52f5295 | 4404 | vec<cgraph_edge *> callers) |
5e45130d | 4405 | { |
99b1c316 | 4406 | class ipa_node_params *new_info, *info = IPA_NODE_REF (node); |
d52f5295 | 4407 | vec<ipa_replace_map *, va_gc> *replace_trees = NULL; |
ff6686d2 | 4408 | vec<ipa_adjusted_param, va_gc> *new_params = NULL; |
79ee9826 | 4409 | struct ipa_agg_replacement_value *av; |
310bc633 MJ |
4410 | struct cgraph_node *new_node; |
4411 | int i, count = ipa_get_param_count (info); | |
ff6686d2 MJ |
4412 | ipa_param_adjustments *old_adjustments = node->clone.param_adjustments; |
4413 | ipa_param_adjustments *new_adjustments; | |
310bc633 | 4414 | gcc_assert (!info->ipcp_orig_node); |
87f94429 | 4415 | gcc_assert (node->can_change_signature |
ff6686d2 MJ |
4416 | || !old_adjustments); |
4417 | ||
4418 | if (old_adjustments) | |
4419 | { | |
4420 | /* At the moment all IPA optimizations should use the number of | |
4421 | parameters of the prevailing decl as the m_always_copy_start. | |
4422 | Handling any other value would complicate the code below, so for the | |
4423 | time bing let's only assert it is so. */ | |
4424 | gcc_assert (old_adjustments->m_always_copy_start == count | |
4425 | || old_adjustments->m_always_copy_start < 0); | |
4426 | int old_adj_count = vec_safe_length (old_adjustments->m_adj_params); | |
4427 | for (i = 0; i < old_adj_count; i++) | |
310bc633 | 4428 | { |
ff6686d2 | 4429 | ipa_adjusted_param *old_adj = &(*old_adjustments->m_adj_params)[i]; |
87f94429 | 4430 | if (!node->can_change_signature |
ff6686d2 MJ |
4431 | || old_adj->op != IPA_PARAM_OP_COPY |
4432 | || (!known_csts[old_adj->base_index] | |
4433 | && ipa_is_param_used (info, old_adj->base_index))) | |
4434 | { | |
4435 | ipa_adjusted_param new_adj = *old_adj; | |
310bc633 | 4436 | |
ff6686d2 MJ |
4437 | new_adj.prev_clone_adjustment = true; |
4438 | new_adj.prev_clone_index = i; | |
4439 | vec_safe_push (new_params, new_adj); | |
4440 | } | |
310bc633 | 4441 | } |
ff6686d2 MJ |
4442 | bool skip_return = old_adjustments->m_skip_return; |
4443 | new_adjustments = (new (ggc_alloc <ipa_param_adjustments> ()) | |
4444 | ipa_param_adjustments (new_params, count, | |
4445 | skip_return)); | |
310bc633 | 4446 | } |
87f94429 | 4447 | else if (node->can_change_signature |
ff6686d2 | 4448 | && want_remove_some_param_p (node, known_csts)) |
d7da5cc8 | 4449 | { |
ff6686d2 MJ |
4450 | ipa_adjusted_param adj; |
4451 | memset (&adj, 0, sizeof (adj)); | |
4452 | adj.op = IPA_PARAM_OP_COPY; | |
4453 | for (i = 0; i < count; i++) | |
4454 | if (!known_csts[i] && ipa_is_param_used (info, i)) | |
4455 | { | |
4456 | adj.base_index = i; | |
4457 | adj.prev_clone_index = i; | |
4458 | vec_safe_push (new_params, adj); | |
4459 | } | |
4460 | new_adjustments = (new (ggc_alloc <ipa_param_adjustments> ()) | |
4461 | ipa_param_adjustments (new_params, count, false)); | |
d7da5cc8 | 4462 | } |
ff6686d2 MJ |
4463 | else |
4464 | new_adjustments = NULL; | |
310bc633 | 4465 | |
ff6686d2 | 4466 | replace_trees = vec_safe_copy (node->clone.tree_map); |
155c9907 | 4467 | for (i = 0; i < count; i++) |
310bc633 | 4468 | { |
44210a96 MJ |
4469 | tree t = known_csts[i]; |
4470 | if (t) | |
310bc633 MJ |
4471 | { |
4472 | struct ipa_replace_map *replace_map; | |
4473 | ||
44210a96 | 4474 | gcc_checking_assert (TREE_CODE (t) != TREE_BINFO); |
0e8853ee | 4475 | replace_map = get_replacement_map (info, t, i); |
310bc633 | 4476 | if (replace_map) |
9771b263 | 4477 | vec_safe_push (replace_trees, replace_map); |
310bc633 | 4478 | } |
5e45130d | 4479 | } |
7b668576 MJ |
4480 | auto_vec<cgraph_edge *, 2> self_recursive_calls; |
4481 | for (i = callers.length () - 1; i >= 0; i--) | |
4482 | { | |
4483 | cgraph_edge *cs = callers[i]; | |
4484 | if (cs->caller == node) | |
4485 | { | |
4486 | self_recursive_calls.safe_push (cs); | |
4487 | callers.unordered_remove (i); | |
4488 | } | |
4489 | } | |
5e45130d | 4490 | |
9e0b0ec3 MP |
4491 | unsigned &suffix_counter = clone_num_suffixes->get_or_insert ( |
4492 | IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME ( | |
4493 | node->decl))); | |
d52f5295 | 4494 | new_node = node->create_virtual_clone (callers, replace_trees, |
ff6686d2 | 4495 | new_adjustments, "constprop", |
53aedcce MP |
4496 | suffix_counter); |
4497 | suffix_counter++; | |
7b668576 | 4498 | |
5bf31c64 | 4499 | bool have_self_recursive_calls = !self_recursive_calls.is_empty (); |
7b668576 MJ |
4500 | for (unsigned j = 0; j < self_recursive_calls.length (); j++) |
4501 | { | |
1ac2bdb4 | 4502 | cgraph_edge *cs = get_next_cgraph_edge_clone (self_recursive_calls[j]); |
5fc1b920 MJ |
4503 | /* Cloned edges can disappear during cloning as speculation can be |
4504 | resolved, check that we have one and that it comes from the last | |
4505 | cloning. */ | |
4506 | if (cs && cs->caller == new_node) | |
4507 | cs->redirect_callee_duplicating_thunks (new_node); | |
4508 | /* Any future code that would make more than one clone of an outgoing | |
4509 | edge would confuse this mechanism, so let's check that does not | |
4510 | happen. */ | |
4511 | gcc_checking_assert (!cs | |
1ac2bdb4 ML |
4512 | || !get_next_cgraph_edge_clone (cs) |
4513 | || get_next_cgraph_edge_clone (cs)->caller != new_node); | |
7b668576 | 4514 | } |
5bf31c64 MJ |
4515 | if (have_self_recursive_calls) |
4516 | new_node->expand_all_artificial_thunks (); | |
7b668576 | 4517 | |
2c9561b5 | 4518 | ipa_set_node_agg_value_chain (new_node, aggvals); |
79ee9826 | 4519 | for (av = aggvals; av; av = av->next) |
2d8d3ae2 | 4520 | new_node->maybe_create_reference (av->value, NULL); |
79ee9826 | 4521 | |
310bc633 | 4522 | if (dump_file && (dump_flags & TDF_DETAILS)) |
2c9561b5 | 4523 | { |
464d0118 | 4524 | fprintf (dump_file, " the new node is %s.\n", new_node->dump_name ()); |
44210a96 MJ |
4525 | if (known_contexts.exists ()) |
4526 | { | |
155c9907 | 4527 | for (i = 0; i < count; i++) |
44210a96 MJ |
4528 | if (!known_contexts[i].useless_p ()) |
4529 | { | |
4530 | fprintf (dump_file, " known ctx %i is ", i); | |
4531 | known_contexts[i].dump (dump_file); | |
4532 | } | |
4533 | } | |
2c9561b5 MJ |
4534 | if (aggvals) |
4535 | ipa_dump_agg_replacement_values (dump_file, aggvals); | |
4536 | } | |
9de6f6c3 | 4537 | ipa_check_create_node_params (); |
310bc633 MJ |
4538 | update_profiling_info (node, new_node); |
4539 | new_info = IPA_NODE_REF (new_node); | |
4540 | new_info->ipcp_orig_node = node; | |
6cf67b62 | 4541 | new_node->ipcp_clone = true; |
44210a96 MJ |
4542 | new_info->known_csts = known_csts; |
4543 | new_info->known_contexts = known_contexts; | |
5e45130d | 4544 | |
44210a96 | 4545 | ipcp_discover_new_direct_edges (new_node, known_csts, known_contexts, aggvals); |
310bc633 | 4546 | |
9771b263 | 4547 | callers.release (); |
310bc633 | 4548 | return new_node; |
5e45130d JH |
4549 | } |
4550 | ||
7b668576 MJ |
4551 | /* Return true, if JFUNC, which describes a i-th parameter of call CS, is a |
4552 | simple no-operation pass-through function to itself. */ | |
4553 | ||
4554 | static bool | |
4555 | self_recursive_pass_through_p (cgraph_edge *cs, ipa_jump_func *jfunc, int i) | |
4556 | { | |
4557 | enum availability availability; | |
4558 | if (cs->caller == cs->callee->function_symbol (&availability) | |
4559 | && availability > AVAIL_INTERPOSABLE | |
4560 | && jfunc->type == IPA_JF_PASS_THROUGH | |
4561 | && ipa_get_jf_pass_through_operation (jfunc) == NOP_EXPR | |
4562 | && ipa_get_jf_pass_through_formal_id (jfunc) == i) | |
4563 | return true; | |
4564 | return false; | |
4565 | } | |
4566 | ||
951e27f5 FX |
4567 | /* Return true, if JFUNC, which describes a part of an aggregate represented |
4568 | or pointed to by the i-th parameter of call CS, is a simple no-operation | |
4569 | pass-through function to itself. */ | |
4570 | ||
4571 | static bool | |
4572 | self_recursive_agg_pass_through_p (cgraph_edge *cs, ipa_agg_jf_item *jfunc, | |
4573 | int i) | |
4574 | { | |
4575 | enum availability availability; | |
4576 | if (cs->caller == cs->callee->function_symbol (&availability) | |
4577 | && availability > AVAIL_INTERPOSABLE | |
4578 | && jfunc->jftype == IPA_JF_LOAD_AGG | |
4579 | && jfunc->offset == jfunc->value.load_agg.offset | |
4580 | && jfunc->value.pass_through.operation == NOP_EXPR | |
4581 | && jfunc->value.pass_through.formal_id == i) | |
4582 | return true; | |
4583 | return false; | |
4584 | } | |
4585 | ||
310bc633 | 4586 | /* Given a NODE, and a subset of its CALLERS, try to populate blanks slots in |
44210a96 | 4587 | KNOWN_CSTS with constants that are also known for all of the CALLERS. */ |
3949c4a7 MJ |
4588 | |
4589 | static void | |
2c9561b5 | 4590 | find_more_scalar_values_for_callers_subset (struct cgraph_node *node, |
44210a96 | 4591 | vec<tree> known_csts, |
d52f5295 | 4592 | vec<cgraph_edge *> callers) |
3949c4a7 | 4593 | { |
99b1c316 | 4594 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 | 4595 | int i, count = ipa_get_param_count (info); |
3949c4a7 | 4596 | |
155c9907 | 4597 | for (i = 0; i < count; i++) |
3949c4a7 | 4598 | { |
310bc633 MJ |
4599 | struct cgraph_edge *cs; |
4600 | tree newval = NULL_TREE; | |
4601 | int j; | |
df0d8136 | 4602 | bool first = true; |
e5cf5e11 | 4603 | tree type = ipa_get_type (info, i); |
3949c4a7 | 4604 | |
44210a96 | 4605 | if (ipa_get_scalar_lat (info, i)->bottom || known_csts[i]) |
3949c4a7 MJ |
4606 | continue; |
4607 | ||
9771b263 | 4608 | FOR_EACH_VEC_ELT (callers, j, cs) |
49c471e3 | 4609 | { |
310bc633 MJ |
4610 | struct ipa_jump_func *jump_func; |
4611 | tree t; | |
40591473 | 4612 | |
68188fff | 4613 | if (IPA_NODE_REF (cs->caller) && IPA_NODE_REF (cs->caller)->node_dead) |
7b668576 MJ |
4614 | continue; |
4615 | ||
a33c028e JH |
4616 | if (!IPA_EDGE_REF (cs) |
4617 | || i >= ipa_get_cs_argument_count (IPA_EDGE_REF (cs)) | |
173b7355 | 4618 | || (i == 0 |
31db0fe0 | 4619 | && call_passes_through_thunk_p (cs))) |
155c9907 JJ |
4620 | { |
4621 | newval = NULL_TREE; | |
4622 | break; | |
4623 | } | |
310bc633 | 4624 | jump_func = ipa_get_ith_jump_func (IPA_EDGE_REF (cs), i); |
7b668576 MJ |
4625 | if (self_recursive_pass_through_p (cs, jump_func, i)) |
4626 | continue; | |
4627 | ||
e5cf5e11 | 4628 | t = ipa_value_from_jfunc (IPA_NODE_REF (cs->caller), jump_func, type); |
310bc633 MJ |
4629 | if (!t |
4630 | || (newval | |
df0d8136 JH |
4631 | && !values_equal_for_ipcp_p (t, newval)) |
4632 | || (!first && !newval)) | |
3949c4a7 | 4633 | { |
310bc633 MJ |
4634 | newval = NULL_TREE; |
4635 | break; | |
3949c4a7 | 4636 | } |
310bc633 MJ |
4637 | else |
4638 | newval = t; | |
df0d8136 | 4639 | first = false; |
3949c4a7 MJ |
4640 | } |
4641 | ||
310bc633 MJ |
4642 | if (newval) |
4643 | { | |
4644 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
4645 | { | |
2c9561b5 | 4646 | fprintf (dump_file, " adding an extra known scalar value "); |
310bc633 | 4647 | print_ipcp_constant_value (dump_file, newval); |
0e8853ee JH |
4648 | fprintf (dump_file, " for "); |
4649 | ipa_dump_param (dump_file, info, i); | |
310bc633 MJ |
4650 | fprintf (dump_file, "\n"); |
4651 | } | |
5e45130d | 4652 | |
44210a96 | 4653 | known_csts[i] = newval; |
310bc633 | 4654 | } |
5e45130d | 4655 | } |
5e45130d JH |
4656 | } |
4657 | ||
44210a96 MJ |
4658 | /* Given a NODE and a subset of its CALLERS, try to populate plank slots in |
4659 | KNOWN_CONTEXTS with polymorphic contexts that are also known for all of the | |
4660 | CALLERS. */ | |
4661 | ||
4662 | static void | |
4663 | find_more_contexts_for_caller_subset (cgraph_node *node, | |
4664 | vec<ipa_polymorphic_call_context> | |
4665 | *known_contexts, | |
4666 | vec<cgraph_edge *> callers) | |
4667 | { | |
4668 | ipa_node_params *info = IPA_NODE_REF (node); | |
4669 | int i, count = ipa_get_param_count (info); | |
4670 | ||
155c9907 | 4671 | for (i = 0; i < count; i++) |
44210a96 MJ |
4672 | { |
4673 | cgraph_edge *cs; | |
4674 | ||
4675 | if (ipa_get_poly_ctx_lat (info, i)->bottom | |
4676 | || (known_contexts->exists () | |
4677 | && !(*known_contexts)[i].useless_p ())) | |
4678 | continue; | |
4679 | ||
4680 | ipa_polymorphic_call_context newval; | |
df0d8136 | 4681 | bool first = true; |
44210a96 MJ |
4682 | int j; |
4683 | ||
4684 | FOR_EACH_VEC_ELT (callers, j, cs) | |
4685 | { | |
a33c028e JH |
4686 | if (!IPA_EDGE_REF (cs) |
4687 | || i >= ipa_get_cs_argument_count (IPA_EDGE_REF (cs))) | |
44210a96 MJ |
4688 | return; |
4689 | ipa_jump_func *jfunc = ipa_get_ith_jump_func (IPA_EDGE_REF (cs), | |
4690 | i); | |
4691 | ipa_polymorphic_call_context ctx; | |
4692 | ctx = ipa_context_from_jfunc (IPA_NODE_REF (cs->caller), cs, i, | |
4693 | jfunc); | |
df0d8136 | 4694 | if (first) |
44210a96 | 4695 | { |
44210a96 | 4696 | newval = ctx; |
df0d8136 | 4697 | first = false; |
44210a96 | 4698 | } |
df0d8136 JH |
4699 | else |
4700 | newval.meet_with (ctx); | |
4701 | if (newval.useless_p ()) | |
4702 | break; | |
44210a96 MJ |
4703 | } |
4704 | ||
df0d8136 | 4705 | if (!newval.useless_p ()) |
44210a96 MJ |
4706 | { |
4707 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
4708 | { | |
4709 | fprintf (dump_file, " adding an extra known polymorphic " | |
4710 | "context "); | |
4711 | print_ipcp_constant_value (dump_file, newval); | |
4712 | fprintf (dump_file, " for "); | |
4713 | ipa_dump_param (dump_file, info, i); | |
4714 | fprintf (dump_file, "\n"); | |
4715 | } | |
4716 | ||
4717 | if (!known_contexts->exists ()) | |
4718 | known_contexts->safe_grow_cleared (ipa_get_param_count (info)); | |
4719 | (*known_contexts)[i] = newval; | |
4720 | } | |
4721 | ||
4722 | } | |
4723 | } | |
4724 | ||
2c9561b5 MJ |
4725 | /* Go through PLATS and create a vector of values consisting of values and |
4726 | offsets (minus OFFSET) of lattices that contain only a single value. */ | |
4727 | ||
eb270950 | 4728 | static vec<ipa_agg_value> |
99b1c316 | 4729 | copy_plats_to_inter (class ipcp_param_lattices *plats, HOST_WIDE_INT offset) |
2c9561b5 | 4730 | { |
eb270950 | 4731 | vec<ipa_agg_value> res = vNULL; |
2c9561b5 MJ |
4732 | |
4733 | if (!plats->aggs || plats->aggs_contain_variable || plats->aggs_bottom) | |
6e1aa848 | 4734 | return vNULL; |
2c9561b5 MJ |
4735 | |
4736 | for (struct ipcp_agg_lattice *aglat = plats->aggs; aglat; aglat = aglat->next) | |
c0cb5055 | 4737 | if (aglat->is_single_const ()) |
2c9561b5 | 4738 | { |
eb270950 | 4739 | struct ipa_agg_value ti; |
2c9561b5 MJ |
4740 | ti.offset = aglat->offset - offset; |
4741 | ti.value = aglat->values->value; | |
9771b263 | 4742 | res.safe_push (ti); |
2c9561b5 MJ |
4743 | } |
4744 | return res; | |
4745 | } | |
4746 | ||
4747 | /* Intersect all values in INTER with single value lattices in PLATS (while | |
4748 | subtracting OFFSET). */ | |
4749 | ||
4750 | static void | |
99b1c316 | 4751 | intersect_with_plats (class ipcp_param_lattices *plats, |
eb270950 | 4752 | vec<ipa_agg_value> *inter, |
2c9561b5 MJ |
4753 | HOST_WIDE_INT offset) |
4754 | { | |
4755 | struct ipcp_agg_lattice *aglat; | |
eb270950 | 4756 | struct ipa_agg_value *item; |
2c9561b5 MJ |
4757 | int k; |
4758 | ||
4759 | if (!plats->aggs || plats->aggs_contain_variable || plats->aggs_bottom) | |
4760 | { | |
9771b263 | 4761 | inter->release (); |
2c9561b5 MJ |
4762 | return; |
4763 | } | |
4764 | ||
4765 | aglat = plats->aggs; | |
9771b263 | 4766 | FOR_EACH_VEC_ELT (*inter, k, item) |
2c9561b5 MJ |
4767 | { |
4768 | bool found = false; | |
4769 | if (!item->value) | |
4770 | continue; | |
4771 | while (aglat) | |
4772 | { | |
4773 | if (aglat->offset - offset > item->offset) | |
4774 | break; | |
4775 | if (aglat->offset - offset == item->offset) | |
4776 | { | |
4777 | gcc_checking_assert (item->value); | |
951e27f5 FX |
4778 | if (aglat->is_single_const ()) |
4779 | { | |
4780 | tree value = aglat->values->value; | |
4781 | ||
4782 | if (values_equal_for_ipcp_p (item->value, value)) | |
4783 | found = true; | |
4784 | else if (item->value == error_mark_node) | |
4785 | { | |
4786 | /* Replace unknown place holder value with real one. */ | |
4787 | item->value = value; | |
4788 | found = true; | |
4789 | } | |
4790 | } | |
2c9561b5 MJ |
4791 | break; |
4792 | } | |
4793 | aglat = aglat->next; | |
4794 | } | |
4795 | if (!found) | |
4796 | item->value = NULL_TREE; | |
4797 | } | |
4798 | } | |
4799 | ||
5764ee3c | 4800 | /* Copy aggregate replacement values of NODE (which is an IPA-CP clone) to the |
2c9561b5 MJ |
4801 | vector result while subtracting OFFSET from the individual value offsets. */ |
4802 | ||
eb270950 | 4803 | static vec<ipa_agg_value> |
0fd44da3 MJ |
4804 | agg_replacements_to_vector (struct cgraph_node *node, int index, |
4805 | HOST_WIDE_INT offset) | |
2c9561b5 MJ |
4806 | { |
4807 | struct ipa_agg_replacement_value *av; | |
eb270950 | 4808 | vec<ipa_agg_value> res = vNULL; |
2c9561b5 MJ |
4809 | |
4810 | for (av = ipa_get_agg_replacements_for_node (node); av; av = av->next) | |
0fd44da3 MJ |
4811 | if (av->index == index |
4812 | && (av->offset - offset) >= 0) | |
2c9561b5 | 4813 | { |
eb270950 | 4814 | struct ipa_agg_value item; |
2c9561b5 MJ |
4815 | gcc_checking_assert (av->value); |
4816 | item.offset = av->offset - offset; | |
4817 | item.value = av->value; | |
9771b263 | 4818 | res.safe_push (item); |
2c9561b5 MJ |
4819 | } |
4820 | ||
4821 | return res; | |
4822 | } | |
4823 | ||
4824 | /* Intersect all values in INTER with those that we have already scheduled to | |
4825 | be replaced in parameter number INDEX of NODE, which is an IPA-CP clone | |
4826 | (while subtracting OFFSET). */ | |
4827 | ||
4828 | static void | |
4829 | intersect_with_agg_replacements (struct cgraph_node *node, int index, | |
eb270950 | 4830 | vec<ipa_agg_value> *inter, |
2c9561b5 MJ |
4831 | HOST_WIDE_INT offset) |
4832 | { | |
4833 | struct ipa_agg_replacement_value *srcvals; | |
eb270950 | 4834 | struct ipa_agg_value *item; |
2c9561b5 MJ |
4835 | int i; |
4836 | ||
4837 | srcvals = ipa_get_agg_replacements_for_node (node); | |
4838 | if (!srcvals) | |
4839 | { | |
9771b263 | 4840 | inter->release (); |
2c9561b5 MJ |
4841 | return; |
4842 | } | |
4843 | ||
9771b263 | 4844 | FOR_EACH_VEC_ELT (*inter, i, item) |
2c9561b5 MJ |
4845 | { |
4846 | struct ipa_agg_replacement_value *av; | |
4847 | bool found = false; | |
4848 | if (!item->value) | |
4849 | continue; | |
4850 | for (av = srcvals; av; av = av->next) | |
4851 | { | |
4852 | gcc_checking_assert (av->value); | |
4853 | if (av->index == index | |
4854 | && av->offset - offset == item->offset) | |
4855 | { | |
4856 | if (values_equal_for_ipcp_p (item->value, av->value)) | |
4857 | found = true; | |
951e27f5 FX |
4858 | else if (item->value == error_mark_node) |
4859 | { | |
4860 | /* Replace place holder value with real one. */ | |
4861 | item->value = av->value; | |
4862 | found = true; | |
4863 | } | |
2c9561b5 MJ |
4864 | break; |
4865 | } | |
4866 | } | |
4867 | if (!found) | |
4868 | item->value = NULL_TREE; | |
4869 | } | |
4870 | } | |
4871 | ||
7e9f2b6e MJ |
4872 | /* Intersect values in INTER with aggregate values that come along edge CS to |
4873 | parameter number INDEX and return it. If INTER does not actually exist yet, | |
4874 | copy all incoming values to it. If we determine we ended up with no values | |
4875 | whatsoever, return a released vector. */ | |
4876 | ||
eb270950 | 4877 | static vec<ipa_agg_value> |
7e9f2b6e | 4878 | intersect_aggregates_with_edge (struct cgraph_edge *cs, int index, |
eb270950 | 4879 | vec<ipa_agg_value> inter) |
7e9f2b6e MJ |
4880 | { |
4881 | struct ipa_jump_func *jfunc; | |
4882 | jfunc = ipa_get_ith_jump_func (IPA_EDGE_REF (cs), index); | |
4883 | if (jfunc->type == IPA_JF_PASS_THROUGH | |
4884 | && ipa_get_jf_pass_through_operation (jfunc) == NOP_EXPR) | |
4885 | { | |
99b1c316 | 4886 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
7e9f2b6e MJ |
4887 | int src_idx = ipa_get_jf_pass_through_formal_id (jfunc); |
4888 | ||
4889 | if (caller_info->ipcp_orig_node) | |
4890 | { | |
4891 | struct cgraph_node *orig_node = caller_info->ipcp_orig_node; | |
99b1c316 | 4892 | class ipcp_param_lattices *orig_plats; |
7e9f2b6e MJ |
4893 | orig_plats = ipa_get_parm_lattices (IPA_NODE_REF (orig_node), |
4894 | src_idx); | |
4895 | if (agg_pass_through_permissible_p (orig_plats, jfunc)) | |
4896 | { | |
4897 | if (!inter.exists ()) | |
0fd44da3 | 4898 | inter = agg_replacements_to_vector (cs->caller, src_idx, 0); |
7e9f2b6e MJ |
4899 | else |
4900 | intersect_with_agg_replacements (cs->caller, src_idx, | |
4901 | &inter, 0); | |
4902 | } | |
c8f40352 MJ |
4903 | else |
4904 | { | |
4905 | inter.release (); | |
4906 | return vNULL; | |
4907 | } | |
7e9f2b6e MJ |
4908 | } |
4909 | else | |
4910 | { | |
99b1c316 | 4911 | class ipcp_param_lattices *src_plats; |
7e9f2b6e MJ |
4912 | src_plats = ipa_get_parm_lattices (caller_info, src_idx); |
4913 | if (agg_pass_through_permissible_p (src_plats, jfunc)) | |
4914 | { | |
4915 | /* Currently we do not produce clobber aggregate jump | |
4916 | functions, adjust when we do. */ | |
4917 | gcc_checking_assert (!jfunc->agg.items); | |
4918 | if (!inter.exists ()) | |
4919 | inter = copy_plats_to_inter (src_plats, 0); | |
4920 | else | |
4921 | intersect_with_plats (src_plats, &inter, 0); | |
4922 | } | |
c8f40352 MJ |
4923 | else |
4924 | { | |
4925 | inter.release (); | |
4926 | return vNULL; | |
4927 | } | |
7e9f2b6e MJ |
4928 | } |
4929 | } | |
4930 | else if (jfunc->type == IPA_JF_ANCESTOR | |
4931 | && ipa_get_jf_ancestor_agg_preserved (jfunc)) | |
4932 | { | |
99b1c316 | 4933 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
7e9f2b6e | 4934 | int src_idx = ipa_get_jf_ancestor_formal_id (jfunc); |
99b1c316 | 4935 | class ipcp_param_lattices *src_plats; |
7e9f2b6e MJ |
4936 | HOST_WIDE_INT delta = ipa_get_jf_ancestor_offset (jfunc); |
4937 | ||
4938 | if (caller_info->ipcp_orig_node) | |
4939 | { | |
4940 | if (!inter.exists ()) | |
0fd44da3 | 4941 | inter = agg_replacements_to_vector (cs->caller, src_idx, delta); |
7e9f2b6e | 4942 | else |
0fd44da3 | 4943 | intersect_with_agg_replacements (cs->caller, src_idx, &inter, |
7e9f2b6e MJ |
4944 | delta); |
4945 | } | |
4946 | else | |
4947 | { | |
5de73c05 | 4948 | src_plats = ipa_get_parm_lattices (caller_info, src_idx); |
7e9f2b6e MJ |
4949 | /* Currently we do not produce clobber aggregate jump |
4950 | functions, adjust when we do. */ | |
4951 | gcc_checking_assert (!src_plats->aggs || !jfunc->agg.items); | |
4952 | if (!inter.exists ()) | |
4953 | inter = copy_plats_to_inter (src_plats, delta); | |
4954 | else | |
4955 | intersect_with_plats (src_plats, &inter, delta); | |
4956 | } | |
4957 | } | |
4958 | else if (jfunc->agg.items) | |
4959 | { | |
eb270950 FX |
4960 | class ipa_node_params *caller_info = IPA_NODE_REF (cs->caller); |
4961 | struct ipa_agg_value *item; | |
7e9f2b6e MJ |
4962 | int k; |
4963 | ||
4964 | if (!inter.exists ()) | |
4965 | for (unsigned i = 0; i < jfunc->agg.items->length (); i++) | |
eb270950 FX |
4966 | { |
4967 | struct ipa_agg_jf_item *agg_item = &(*jfunc->agg.items)[i]; | |
951e27f5 FX |
4968 | struct ipa_agg_value agg_value; |
4969 | ||
4970 | if (self_recursive_agg_pass_through_p (cs, agg_item, index)) | |
eb270950 | 4971 | { |
951e27f5 FX |
4972 | /* For a self-recursive call, if aggregate jump function is a |
4973 | simple pass-through, the exact value that it stands for is | |
4974 | not known at this point, which must comes from other call | |
4975 | sites. But we still need to add a place holder in value | |
4976 | sets to indicate it, here we use error_mark_node to | |
4977 | represent the special unknown value, which will be replaced | |
4978 | with real one during later intersecting operations. */ | |
4979 | agg_value.value = error_mark_node; | |
4980 | } | |
4981 | else | |
4982 | { | |
4983 | tree value = ipa_agg_value_from_node (caller_info, cs->caller, | |
4984 | agg_item); | |
4985 | if (!value) | |
4986 | continue; | |
eb270950 | 4987 | |
eb270950 | 4988 | agg_value.value = value; |
eb270950 | 4989 | } |
951e27f5 FX |
4990 | |
4991 | agg_value.offset = agg_item->offset; | |
4992 | inter.safe_push (agg_value); | |
eb270950 | 4993 | } |
7e9f2b6e MJ |
4994 | else |
4995 | FOR_EACH_VEC_ELT (inter, k, item) | |
4996 | { | |
4997 | int l = 0; | |
5de73c05 | 4998 | bool found = false; |
7e9f2b6e MJ |
4999 | |
5000 | if (!item->value) | |
5001 | continue; | |
5002 | ||
5003 | while ((unsigned) l < jfunc->agg.items->length ()) | |
5004 | { | |
5005 | struct ipa_agg_jf_item *ti; | |
5006 | ti = &(*jfunc->agg.items)[l]; | |
5007 | if (ti->offset > item->offset) | |
5008 | break; | |
5009 | if (ti->offset == item->offset) | |
5010 | { | |
951e27f5 FX |
5011 | tree value; |
5012 | ||
5013 | if (self_recursive_agg_pass_through_p (cs, ti, index)) | |
5014 | { | |
5015 | /* A simple aggregate pass-through in self-recursive | |
5016 | call should lead to same value. */ | |
5017 | found = true; | |
5018 | } | |
5019 | else if ((value = ipa_agg_value_from_node (caller_info, | |
5020 | cs->caller, ti))) | |
5021 | { | |
5022 | if (values_equal_for_ipcp_p (item->value, value)) | |
5023 | found = true; | |
5024 | else if (item->value == error_mark_node) | |
5025 | { | |
5026 | /* Replace unknown place holder value with real | |
5027 | one. */ | |
5028 | item->value = value; | |
5029 | found = true; | |
5030 | } | |
5031 | } | |
7e9f2b6e MJ |
5032 | break; |
5033 | } | |
5034 | l++; | |
5035 | } | |
5036 | if (!found) | |
5037 | item->value = NULL; | |
5038 | } | |
5039 | } | |
5040 | else | |
5041 | { | |
c3284718 | 5042 | inter.release (); |
eb270950 | 5043 | return vNULL; |
7e9f2b6e MJ |
5044 | } |
5045 | return inter; | |
5046 | } | |
5047 | ||
2c9561b5 MJ |
5048 | /* Look at edges in CALLERS and collect all known aggregate values that arrive |
5049 | from all of them. */ | |
5050 | ||
5051 | static struct ipa_agg_replacement_value * | |
5052 | find_aggregate_values_for_callers_subset (struct cgraph_node *node, | |
d52f5295 | 5053 | vec<cgraph_edge *> callers) |
2c9561b5 | 5054 | { |
99b1c316 | 5055 | class ipa_node_params *dest_info = IPA_NODE_REF (node); |
6f9549ee MJ |
5056 | struct ipa_agg_replacement_value *res; |
5057 | struct ipa_agg_replacement_value **tail = &res; | |
2c9561b5 | 5058 | struct cgraph_edge *cs; |
dffdd6e5 | 5059 | int i, j, count = ipa_get_param_count (dest_info); |
2c9561b5 | 5060 | |
9771b263 | 5061 | FOR_EACH_VEC_ELT (callers, j, cs) |
2c9561b5 | 5062 | { |
a33c028e JH |
5063 | if (!IPA_EDGE_REF (cs)) |
5064 | { | |
5065 | count = 0; | |
5066 | break; | |
5067 | } | |
2c9561b5 MJ |
5068 | int c = ipa_get_cs_argument_count (IPA_EDGE_REF (cs)); |
5069 | if (c < count) | |
5070 | count = c; | |
5071 | } | |
5072 | ||
155c9907 | 5073 | for (i = 0; i < count; i++) |
2c9561b5 MJ |
5074 | { |
5075 | struct cgraph_edge *cs; | |
eb270950 FX |
5076 | vec<ipa_agg_value> inter = vNULL; |
5077 | struct ipa_agg_value *item; | |
99b1c316 | 5078 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (dest_info, i); |
2c9561b5 MJ |
5079 | int j; |
5080 | ||
5081 | /* Among other things, the following check should deal with all by_ref | |
5082 | mismatches. */ | |
7b920a9a | 5083 | if (plats->aggs_bottom) |
2c9561b5 MJ |
5084 | continue; |
5085 | ||
9771b263 | 5086 | FOR_EACH_VEC_ELT (callers, j, cs) |
2c9561b5 | 5087 | { |
7b668576 MJ |
5088 | struct ipa_jump_func *jfunc |
5089 | = ipa_get_ith_jump_func (IPA_EDGE_REF (cs), i); | |
cf254442 MJ |
5090 | if (self_recursive_pass_through_p (cs, jfunc, i) |
5091 | && (!plats->aggs_by_ref | |
5092 | || ipa_get_jf_pass_through_agg_preserved (jfunc))) | |
7b668576 | 5093 | continue; |
7e9f2b6e | 5094 | inter = intersect_aggregates_with_edge (cs, i, inter); |
2c9561b5 | 5095 | |
9771b263 | 5096 | if (!inter.exists ()) |
2c9561b5 MJ |
5097 | goto next_param; |
5098 | } | |
5099 | ||
9771b263 | 5100 | FOR_EACH_VEC_ELT (inter, j, item) |
2c9561b5 MJ |
5101 | { |
5102 | struct ipa_agg_replacement_value *v; | |
5103 | ||
5104 | if (!item->value) | |
5105 | continue; | |
5106 | ||
951e27f5 FX |
5107 | /* All values must be real values, not unknown place holders. */ |
5108 | gcc_assert (item->value != error_mark_node); | |
5109 | ||
766090c2 | 5110 | v = ggc_alloc<ipa_agg_replacement_value> (); |
2c9561b5 MJ |
5111 | v->index = i; |
5112 | v->offset = item->offset; | |
5113 | v->value = item->value; | |
7b920a9a | 5114 | v->by_ref = plats->aggs_by_ref; |
6f9549ee MJ |
5115 | *tail = v; |
5116 | tail = &v->next; | |
2c9561b5 MJ |
5117 | } |
5118 | ||
5119 | next_param: | |
9771b263 DN |
5120 | if (inter.exists ()) |
5121 | inter.release (); | |
2c9561b5 | 5122 | } |
6f9549ee | 5123 | *tail = NULL; |
2c9561b5 MJ |
5124 | return res; |
5125 | } | |
5126 | ||
2c9561b5 MJ |
5127 | /* Determine whether CS also brings all scalar values that the NODE is |
5128 | specialized for. */ | |
5129 | ||
5130 | static bool | |
5131 | cgraph_edge_brings_all_scalars_for_node (struct cgraph_edge *cs, | |
5132 | struct cgraph_node *node) | |
5133 | { | |
99b1c316 | 5134 | class ipa_node_params *dest_info = IPA_NODE_REF (node); |
2c9561b5 | 5135 | int count = ipa_get_param_count (dest_info); |
99b1c316 MS |
5136 | class ipa_node_params *caller_info; |
5137 | class ipa_edge_args *args; | |
2c9561b5 MJ |
5138 | int i; |
5139 | ||
5140 | caller_info = IPA_NODE_REF (cs->caller); | |
5141 | args = IPA_EDGE_REF (cs); | |
5142 | for (i = 0; i < count; i++) | |
5143 | { | |
5144 | struct ipa_jump_func *jump_func; | |
5145 | tree val, t; | |
5146 | ||
44210a96 | 5147 | val = dest_info->known_csts[i]; |
2c9561b5 MJ |
5148 | if (!val) |
5149 | continue; | |
5150 | ||
5151 | if (i >= ipa_get_cs_argument_count (args)) | |
5152 | return false; | |
5153 | jump_func = ipa_get_ith_jump_func (args, i); | |
e5cf5e11 PK |
5154 | t = ipa_value_from_jfunc (caller_info, jump_func, |
5155 | ipa_get_type (dest_info, i)); | |
2c9561b5 MJ |
5156 | if (!t || !values_equal_for_ipcp_p (val, t)) |
5157 | return false; | |
5158 | } | |
5159 | return true; | |
5160 | } | |
5161 | ||
5162 | /* Determine whether CS also brings all aggregate values that NODE is | |
5163 | specialized for. */ | |
5164 | static bool | |
5165 | cgraph_edge_brings_all_agg_vals_for_node (struct cgraph_edge *cs, | |
5166 | struct cgraph_node *node) | |
5167 | { | |
99b1c316 | 5168 | class ipa_node_params *orig_node_info; |
2c9561b5 | 5169 | struct ipa_agg_replacement_value *aggval; |
7e9f2b6e | 5170 | int i, ec, count; |
2c9561b5 MJ |
5171 | |
5172 | aggval = ipa_get_agg_replacements_for_node (node); | |
7e9f2b6e MJ |
5173 | if (!aggval) |
5174 | return true; | |
5175 | ||
5176 | count = ipa_get_param_count (IPA_NODE_REF (node)); | |
5177 | ec = ipa_get_cs_argument_count (IPA_EDGE_REF (cs)); | |
5178 | if (ec < count) | |
5179 | for (struct ipa_agg_replacement_value *av = aggval; av; av = av->next) | |
5180 | if (aggval->index >= ec) | |
5181 | return false; | |
5182 | ||
9576e7b1 | 5183 | orig_node_info = IPA_NODE_REF (IPA_NODE_REF (node)->ipcp_orig_node); |
7e9f2b6e MJ |
5184 | |
5185 | for (i = 0; i < count; i++) | |
2c9561b5 | 5186 | { |
99b1c316 | 5187 | class ipcp_param_lattices *plats; |
7e9f2b6e MJ |
5188 | bool interesting = false; |
5189 | for (struct ipa_agg_replacement_value *av = aggval; av; av = av->next) | |
5190 | if (aggval->index == i) | |
5191 | { | |
5192 | interesting = true; | |
5193 | break; | |
5194 | } | |
5195 | if (!interesting) | |
5196 | continue; | |
5197 | ||
9576e7b1 | 5198 | plats = ipa_get_parm_lattices (orig_node_info, aggval->index); |
7e9f2b6e | 5199 | if (plats->aggs_bottom) |
2c9561b5 | 5200 | return false; |
2c9561b5 | 5201 | |
8bda7ce8 | 5202 | vec<ipa_agg_value> values = intersect_aggregates_with_edge (cs, i, vNULL); |
c3284718 | 5203 | if (!values.exists ()) |
2c9561b5 MJ |
5204 | return false; |
5205 | ||
7e9f2b6e MJ |
5206 | for (struct ipa_agg_replacement_value *av = aggval; av; av = av->next) |
5207 | if (aggval->index == i) | |
5208 | { | |
eb270950 | 5209 | struct ipa_agg_value *item; |
7e9f2b6e MJ |
5210 | int j; |
5211 | bool found = false; | |
5212 | FOR_EACH_VEC_ELT (values, j, item) | |
5213 | if (item->value | |
5214 | && item->offset == av->offset | |
5215 | && values_equal_for_ipcp_p (item->value, av->value)) | |
c3272a92 PCC |
5216 | { |
5217 | found = true; | |
5218 | break; | |
5219 | } | |
7e9f2b6e MJ |
5220 | if (!found) |
5221 | { | |
c3284718 | 5222 | values.release (); |
7e9f2b6e MJ |
5223 | return false; |
5224 | } | |
5225 | } | |
8bda7ce8 | 5226 | values.release (); |
2c9561b5 MJ |
5227 | } |
5228 | return true; | |
5229 | } | |
5230 | ||
310bc633 MJ |
5231 | /* Given an original NODE and a VAL for which we have already created a |
5232 | specialized clone, look whether there are incoming edges that still lead | |
5233 | into the old node but now also bring the requested value and also conform to | |
026c3cfd | 5234 | all other criteria such that they can be redirected the special node. |
310bc633 | 5235 | This function can therefore redirect the final edge in a SCC. */ |
3e66255c | 5236 | |
c0cb5055 | 5237 | template <typename valtype> |
3e66255c | 5238 | static void |
c0cb5055 | 5239 | perhaps_add_new_callers (cgraph_node *node, ipcp_value<valtype> *val) |
3e66255c | 5240 | { |
c0cb5055 | 5241 | ipcp_value_source<valtype> *src; |
3995f3a2 | 5242 | profile_count redirected_sum = profile_count::zero (); |
3e66255c | 5243 | |
310bc633 | 5244 | for (src = val->sources; src; src = src->next) |
3e66255c | 5245 | { |
310bc633 MJ |
5246 | struct cgraph_edge *cs = src->cs; |
5247 | while (cs) | |
5248 | { | |
7b668576 | 5249 | if (cgraph_edge_brings_value_p (cs, src, node, val) |
47f4756e MJ |
5250 | && cgraph_edge_brings_all_scalars_for_node (cs, val->spec_node) |
5251 | && cgraph_edge_brings_all_agg_vals_for_node (cs, val->spec_node)) | |
310bc633 | 5252 | { |
47f4756e | 5253 | if (dump_file) |
464d0118 ML |
5254 | fprintf (dump_file, " - adding an extra caller %s of %s\n", |
5255 | cs->caller->dump_name (), | |
5256 | val->spec_node->dump_name ()); | |
47f4756e | 5257 | |
6a4bad95 MJ |
5258 | cs->redirect_callee_duplicating_thunks (val->spec_node); |
5259 | val->spec_node->expand_all_artificial_thunks (); | |
1bad9c18 JH |
5260 | if (cs->count.ipa ().initialized_p ()) |
5261 | redirected_sum = redirected_sum + cs->count.ipa (); | |
310bc633 MJ |
5262 | } |
5263 | cs = get_next_cgraph_edge_clone (cs); | |
5264 | } | |
3e66255c | 5265 | } |
310bc633 | 5266 | |
e3951b03 | 5267 | if (redirected_sum.nonzero_p ()) |
310bc633 | 5268 | update_specialized_profile (val->spec_node, node, redirected_sum); |
3e66255c MJ |
5269 | } |
5270 | ||
44210a96 | 5271 | /* Return true if KNOWN_CONTEXTS contain at least one useful context. */ |
3e66255c | 5272 | |
44210a96 MJ |
5273 | static bool |
5274 | known_contexts_useful_p (vec<ipa_polymorphic_call_context> known_contexts) | |
5275 | { | |
5276 | ipa_polymorphic_call_context *ctx; | |
5277 | int i; | |
5278 | ||
5279 | FOR_EACH_VEC_ELT (known_contexts, i, ctx) | |
5280 | if (!ctx->useless_p ()) | |
5281 | return true; | |
5282 | return false; | |
5283 | } | |
5284 | ||
5285 | /* Return a copy of KNOWN_CSTS if it is not empty, otherwise return vNULL. */ | |
5286 | ||
5287 | static vec<ipa_polymorphic_call_context> | |
5288 | copy_useful_known_contexts (vec<ipa_polymorphic_call_context> known_contexts) | |
5289 | { | |
5290 | if (known_contexts_useful_p (known_contexts)) | |
5291 | return known_contexts.copy (); | |
5292 | else | |
5293 | return vNULL; | |
5294 | } | |
5295 | ||
5296 | /* Copy KNOWN_CSTS and modify the copy according to VAL and INDEX. If | |
5297 | non-empty, replace KNOWN_CONTEXTS with its copy too. */ | |
310bc633 | 5298 | |
518dc859 | 5299 | static void |
44210a96 MJ |
5300 | modify_known_vectors_with_val (vec<tree> *known_csts, |
5301 | vec<ipa_polymorphic_call_context> *known_contexts, | |
5302 | ipcp_value<tree> *val, | |
5303 | int index) | |
518dc859 | 5304 | { |
44210a96 MJ |
5305 | *known_csts = known_csts->copy (); |
5306 | *known_contexts = copy_useful_known_contexts (*known_contexts); | |
5307 | (*known_csts)[index] = val->value; | |
5308 | } | |
518dc859 | 5309 | |
44210a96 MJ |
5310 | /* Replace KNOWN_CSTS with its copy. Also copy KNOWN_CONTEXTS and modify the |
5311 | copy according to VAL and INDEX. */ | |
5312 | ||
5313 | static void | |
5314 | modify_known_vectors_with_val (vec<tree> *known_csts, | |
5315 | vec<ipa_polymorphic_call_context> *known_contexts, | |
5316 | ipcp_value<ipa_polymorphic_call_context> *val, | |
5317 | int index) | |
5318 | { | |
5319 | *known_csts = known_csts->copy (); | |
5320 | *known_contexts = known_contexts->copy (); | |
5321 | (*known_contexts)[index] = val->value; | |
310bc633 | 5322 | } |
5e45130d | 5323 | |
44210a96 MJ |
5324 | /* Return true if OFFSET indicates this was not an aggregate value or there is |
5325 | a replacement equivalent to VALUE, INDEX and OFFSET among those in the | |
5326 | AGGVALS list. */ | |
2c9561b5 MJ |
5327 | |
5328 | DEBUG_FUNCTION bool | |
44210a96 MJ |
5329 | ipcp_val_agg_replacement_ok_p (ipa_agg_replacement_value *aggvals, |
5330 | int index, HOST_WIDE_INT offset, tree value) | |
2c9561b5 | 5331 | { |
44210a96 MJ |
5332 | if (offset == -1) |
5333 | return true; | |
5334 | ||
2c9561b5 MJ |
5335 | while (aggvals) |
5336 | { | |
5337 | if (aggvals->index == index | |
5338 | && aggvals->offset == offset | |
5339 | && values_equal_for_ipcp_p (aggvals->value, value)) | |
5340 | return true; | |
5341 | aggvals = aggvals->next; | |
5342 | } | |
5343 | return false; | |
5344 | } | |
5345 | ||
f25ae20e | 5346 | /* Return true if offset is minus one because source of a polymorphic context |
44210a96 MJ |
5347 | cannot be an aggregate value. */ |
5348 | ||
5349 | DEBUG_FUNCTION bool | |
5350 | ipcp_val_agg_replacement_ok_p (ipa_agg_replacement_value *, | |
5351 | int , HOST_WIDE_INT offset, | |
5352 | ipa_polymorphic_call_context) | |
5353 | { | |
5354 | return offset == -1; | |
5355 | } | |
5356 | ||
f25ae20e | 5357 | /* Decide whether to create a special version of NODE for value VAL of parameter |
2c9561b5 MJ |
5358 | at the given INDEX. If OFFSET is -1, the value is for the parameter itself, |
5359 | otherwise it is stored at the given OFFSET of the parameter. KNOWN_CSTS, | |
44210a96 | 5360 | KNOWN_CONTEXTS and KNOWN_AGGS describe the other already known values. */ |
2c9561b5 | 5361 | |
c0cb5055 | 5362 | template <typename valtype> |
2c9561b5 MJ |
5363 | static bool |
5364 | decide_about_value (struct cgraph_node *node, int index, HOST_WIDE_INT offset, | |
c0cb5055 | 5365 | ipcp_value<valtype> *val, vec<tree> known_csts, |
44210a96 | 5366 | vec<ipa_polymorphic_call_context> known_contexts) |
2c9561b5 MJ |
5367 | { |
5368 | struct ipa_agg_replacement_value *aggvals; | |
5369 | int freq_sum, caller_count; | |
3995f3a2 | 5370 | profile_count count_sum; |
d52f5295 | 5371 | vec<cgraph_edge *> callers; |
2c9561b5 MJ |
5372 | |
5373 | if (val->spec_node) | |
5374 | { | |
5375 | perhaps_add_new_callers (node, val); | |
5376 | return false; | |
5377 | } | |
5378 | else if (val->local_size_cost + overall_size > max_new_size) | |
5379 | { | |
5380 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
5381 | fprintf (dump_file, " Ignoring candidate value because " | |
5382 | "max_new_size would be reached with %li.\n", | |
5383 | val->local_size_cost + overall_size); | |
5384 | return false; | |
5385 | } | |
47f4756e | 5386 | else if (!get_info_about_necessary_edges (val, node, &freq_sum, &count_sum, |
2c9561b5 MJ |
5387 | &caller_count)) |
5388 | return false; | |
5389 | ||
5390 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
5391 | { | |
5392 | fprintf (dump_file, " - considering value "); | |
5393 | print_ipcp_constant_value (dump_file, val->value); | |
0e8853ee JH |
5394 | fprintf (dump_file, " for "); |
5395 | ipa_dump_param (dump_file, IPA_NODE_REF (node), index); | |
2c9561b5 MJ |
5396 | if (offset != -1) |
5397 | fprintf (dump_file, ", offset: " HOST_WIDE_INT_PRINT_DEC, offset); | |
5398 | fprintf (dump_file, " (caller_count: %i)\n", caller_count); | |
5399 | } | |
5400 | ||
5401 | if (!good_cloning_opportunity_p (node, val->local_time_benefit, | |
5402 | freq_sum, count_sum, | |
5403 | val->local_size_cost) | |
5404 | && !good_cloning_opportunity_p (node, | |
5405 | val->local_time_benefit | |
5406 | + val->prop_time_benefit, | |
5407 | freq_sum, count_sum, | |
5408 | val->local_size_cost | |
5409 | + val->prop_size_cost)) | |
5410 | return false; | |
5411 | ||
5412 | if (dump_file) | |
464d0118 ML |
5413 | fprintf (dump_file, " Creating a specialized node of %s.\n", |
5414 | node->dump_name ()); | |
2c9561b5 | 5415 | |
47f4756e | 5416 | callers = gather_edges_for_value (val, node, caller_count); |
2c9561b5 | 5417 | if (offset == -1) |
44210a96 MJ |
5418 | modify_known_vectors_with_val (&known_csts, &known_contexts, val, index); |
5419 | else | |
5420 | { | |
5421 | known_csts = known_csts.copy (); | |
5422 | known_contexts = copy_useful_known_contexts (known_contexts); | |
5423 | } | |
5424 | find_more_scalar_values_for_callers_subset (node, known_csts, callers); | |
5425 | find_more_contexts_for_caller_subset (node, &known_contexts, callers); | |
2c9561b5 | 5426 | aggvals = find_aggregate_values_for_callers_subset (node, callers); |
44210a96 MJ |
5427 | gcc_checking_assert (ipcp_val_agg_replacement_ok_p (aggvals, index, |
5428 | offset, val->value)); | |
5429 | val->spec_node = create_specialized_node (node, known_csts, known_contexts, | |
5430 | aggvals, callers); | |
2c9561b5 MJ |
5431 | overall_size += val->local_size_cost; |
5432 | ||
5433 | /* TODO: If for some lattice there is only one other known value | |
5434 | left, make a special node for it too. */ | |
5435 | ||
5436 | return true; | |
5437 | } | |
5e45130d | 5438 | |
310bc633 | 5439 | /* Decide whether and what specialized clones of NODE should be created. */ |
5e45130d | 5440 | |
310bc633 MJ |
5441 | static bool |
5442 | decide_whether_version_node (struct cgraph_node *node) | |
5443 | { | |
99b1c316 | 5444 | class ipa_node_params *info = IPA_NODE_REF (node); |
310bc633 | 5445 | int i, count = ipa_get_param_count (info); |
44210a96 MJ |
5446 | vec<tree> known_csts; |
5447 | vec<ipa_polymorphic_call_context> known_contexts; | |
310bc633 | 5448 | bool ret = false; |
5e45130d | 5449 | |
310bc633 MJ |
5450 | if (count == 0) |
5451 | return false; | |
5e45130d | 5452 | |
310bc633 | 5453 | if (dump_file && (dump_flags & TDF_DETAILS)) |
464d0118 ML |
5454 | fprintf (dump_file, "\nEvaluating opportunities for %s.\n", |
5455 | node->dump_name ()); | |
5e45130d | 5456 | |
44210a96 | 5457 | gather_context_independent_values (info, &known_csts, &known_contexts, |
eb270950 | 5458 | NULL, NULL); |
5e45130d | 5459 | |
155c9907 | 5460 | for (i = 0; i < count;i++) |
310bc633 | 5461 | { |
99b1c316 | 5462 | class ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); |
c0cb5055 | 5463 | ipcp_lattice<tree> *lat = &plats->itself; |
44210a96 | 5464 | ipcp_lattice<ipa_polymorphic_call_context> *ctxlat = &plats->ctxlat; |
5e45130d | 5465 | |
2c9561b5 | 5466 | if (!lat->bottom |
44210a96 MJ |
5467 | && !known_csts[i]) |
5468 | { | |
5469 | ipcp_value<tree> *val; | |
5470 | for (val = lat->values; val; val = val->next) | |
5471 | ret |= decide_about_value (node, i, -1, val, known_csts, | |
5472 | known_contexts); | |
5473 | } | |
61e03ffc | 5474 | |
eb20b778 | 5475 | if (!plats->aggs_bottom) |
518dc859 | 5476 | { |
2c9561b5 | 5477 | struct ipcp_agg_lattice *aglat; |
c0cb5055 | 5478 | ipcp_value<tree> *val; |
2c9561b5 MJ |
5479 | for (aglat = plats->aggs; aglat; aglat = aglat->next) |
5480 | if (!aglat->bottom && aglat->values | |
5481 | /* If the following is false, the one value is in | |
5482 | known_aggs. */ | |
5483 | && (plats->aggs_contain_variable | |
c0cb5055 | 5484 | || !aglat->is_single_const ())) |
2c9561b5 MJ |
5485 | for (val = aglat->values; val; val = val->next) |
5486 | ret |= decide_about_value (node, i, aglat->offset, val, | |
44210a96 | 5487 | known_csts, known_contexts); |
cc58ceee | 5488 | } |
44210a96 MJ |
5489 | |
5490 | if (!ctxlat->bottom | |
5491 | && known_contexts[i].useless_p ()) | |
5492 | { | |
5493 | ipcp_value<ipa_polymorphic_call_context> *val; | |
5494 | for (val = ctxlat->values; val; val = val->next) | |
5495 | ret |= decide_about_value (node, i, -1, val, known_csts, | |
5496 | known_contexts); | |
5497 | } | |
5498 | ||
155c9907 | 5499 | info = IPA_NODE_REF (node); |
310bc633 | 5500 | } |
cc58ceee | 5501 | |
eb20b778 | 5502 | if (info->do_clone_for_all_contexts) |
310bc633 | 5503 | { |
eb20b778 | 5504 | struct cgraph_node *clone; |
d52f5295 | 5505 | vec<cgraph_edge *> callers; |
cc58ceee | 5506 | |
310bc633 | 5507 | if (dump_file) |
464d0118 ML |
5508 | fprintf (dump_file, " - Creating a specialized node of %s " |
5509 | "for all known contexts.\n", node->dump_name ()); | |
5e45130d | 5510 | |
d52f5295 | 5511 | callers = node->collect_callers (); |
7b668576 MJ |
5512 | find_more_scalar_values_for_callers_subset (node, known_csts, callers); |
5513 | find_more_contexts_for_caller_subset (node, &known_contexts, callers); | |
5514 | ipa_agg_replacement_value *aggvals | |
5515 | = find_aggregate_values_for_callers_subset (node, callers); | |
44210a96 MJ |
5516 | |
5517 | if (!known_contexts_useful_p (known_contexts)) | |
5518 | { | |
5519 | known_contexts.release (); | |
5520 | known_contexts = vNULL; | |
5521 | } | |
5522 | clone = create_specialized_node (node, known_csts, known_contexts, | |
7b668576 | 5523 | aggvals, callers); |
310bc633 | 5524 | info = IPA_NODE_REF (node); |
eb20b778 MJ |
5525 | info->do_clone_for_all_contexts = false; |
5526 | IPA_NODE_REF (clone)->is_all_contexts_clone = true; | |
310bc633 MJ |
5527 | ret = true; |
5528 | } | |
5529 | else | |
44210a96 MJ |
5530 | { |
5531 | known_csts.release (); | |
5532 | known_contexts.release (); | |
5533 | } | |
5e45130d | 5534 | |
310bc633 MJ |
5535 | return ret; |
5536 | } | |
9187e02d | 5537 | |
310bc633 | 5538 | /* Transitively mark all callees of NODE within the same SCC as not dead. */ |
3949c4a7 | 5539 | |
310bc633 MJ |
5540 | static void |
5541 | spread_undeadness (struct cgraph_node *node) | |
5542 | { | |
5543 | struct cgraph_edge *cs; | |
5e45130d | 5544 | |
310bc633 | 5545 | for (cs = node->callees; cs; cs = cs->next_callee) |
4cb13597 | 5546 | if (ipa_edge_within_scc (cs)) |
310bc633 MJ |
5547 | { |
5548 | struct cgraph_node *callee; | |
99b1c316 | 5549 | class ipa_node_params *info; |
129a37fc | 5550 | |
d52f5295 | 5551 | callee = cs->callee->function_symbol (NULL); |
310bc633 | 5552 | info = IPA_NODE_REF (callee); |
5e45130d | 5553 | |
3c4fa8a8 | 5554 | if (info && info->node_dead) |
310bc633 MJ |
5555 | { |
5556 | info->node_dead = 0; | |
5557 | spread_undeadness (callee); | |
5558 | } | |
5559 | } | |
5560 | } | |
5561 | ||
5562 | /* Return true if NODE has a caller from outside of its SCC that is not | |
5563 | dead. Worker callback for cgraph_for_node_and_aliases. */ | |
5564 | ||
5565 | static bool | |
5566 | has_undead_caller_from_outside_scc_p (struct cgraph_node *node, | |
155c9907 | 5567 | void *data ATTRIBUTE_UNUSED) |
310bc633 MJ |
5568 | { |
5569 | struct cgraph_edge *cs; | |
5570 | ||
5571 | for (cs = node->callers; cs; cs = cs->next_caller) | |
5572 | if (cs->caller->thunk.thunk_p | |
d52f5295 ML |
5573 | && cs->caller->call_for_symbol_thunks_and_aliases |
5574 | (has_undead_caller_from_outside_scc_p, NULL, true)) | |
310bc633 | 5575 | return true; |
4cb13597 | 5576 | else if (!ipa_edge_within_scc (cs) |
310bc633 MJ |
5577 | && !IPA_NODE_REF (cs->caller)->node_dead) |
5578 | return true; | |
5579 | return false; | |
5580 | } | |
5581 | ||
5582 | ||
5583 | /* Identify nodes within the same SCC as NODE which are no longer needed | |
5584 | because of new clones and will be removed as unreachable. */ | |
5585 | ||
5586 | static void | |
5587 | identify_dead_nodes (struct cgraph_node *node) | |
5588 | { | |
5589 | struct cgraph_node *v; | |
155c9907 | 5590 | for (v = node; v; v = ((struct ipa_dfs_info *) v->aux)->next_cycle) |
87f94429 | 5591 | if (v->local |
3c4fa8a8 | 5592 | && IPA_NODE_REF (v) |
d52f5295 ML |
5593 | && !v->call_for_symbol_thunks_and_aliases |
5594 | (has_undead_caller_from_outside_scc_p, NULL, true)) | |
310bc633 MJ |
5595 | IPA_NODE_REF (v)->node_dead = 1; |
5596 | ||
155c9907 | 5597 | for (v = node; v; v = ((struct ipa_dfs_info *) v->aux)->next_cycle) |
3c4fa8a8 | 5598 | if (IPA_NODE_REF (v) && !IPA_NODE_REF (v)->node_dead) |
310bc633 MJ |
5599 | spread_undeadness (v); |
5600 | ||
5601 | if (dump_file && (dump_flags & TDF_DETAILS)) | |
5602 | { | |
155c9907 | 5603 | for (v = node; v; v = ((struct ipa_dfs_info *) v->aux)->next_cycle) |
3c4fa8a8 | 5604 | if (IPA_NODE_REF (v) && IPA_NODE_REF (v)->node_dead) |
464d0118 | 5605 | fprintf (dump_file, " Marking node as dead: %s.\n", v->dump_name ()); |
5e45130d | 5606 | } |
310bc633 MJ |
5607 | } |
5608 | ||
5609 | /* The decision stage. Iterate over the topological order of call graph nodes | |
5610 | TOPO and make specialized clones if deemed beneficial. */ | |
5611 | ||
5612 | static void | |
99b1c316 | 5613 | ipcp_decision_stage (class ipa_topo_info *topo) |
310bc633 MJ |
5614 | { |
5615 | int i; | |
5616 | ||
5617 | if (dump_file) | |
5618 | fprintf (dump_file, "\nIPA decision stage:\n\n"); | |
5e45130d | 5619 | |
310bc633 | 5620 | for (i = topo->nnodes - 1; i >= 0; i--) |
5e45130d | 5621 | { |
310bc633 MJ |
5622 | struct cgraph_node *node = topo->order[i]; |
5623 | bool change = false, iterate = true; | |
5624 | ||
5625 | while (iterate) | |
5626 | { | |
5627 | struct cgraph_node *v; | |
5628 | iterate = false; | |
155c9907 | 5629 | for (v = node; v; v = ((struct ipa_dfs_info *) v->aux)->next_cycle) |
d52f5295 | 5630 | if (v->has_gimple_body_p () |
310bc633 MJ |
5631 | && ipcp_versionable_function_p (v)) |
5632 | iterate |= decide_whether_version_node (v); | |
5633 | ||
5634 | change |= iterate; | |
5635 | } | |
5636 | if (change) | |
5637 | identify_dead_nodes (node); | |
518dc859 | 5638 | } |
518dc859 RL |
5639 | } |
5640 | ||
209ca542 PK |
5641 | /* Look up all the bits information that we have discovered and copy it over |
5642 | to the transformation summary. */ | |
5643 | ||
5644 | static void | |
5645 | ipcp_store_bits_results (void) | |
5646 | { | |
5647 | cgraph_node *node; | |
5648 | ||
5649 | FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node) | |
5650 | { | |
5651 | ipa_node_params *info = IPA_NODE_REF (node); | |
5652 | bool dumped_sth = false; | |
5653 | bool found_useful_result = false; | |
5654 | ||
6cf67b62 | 5655 | if (!opt_for_fn (node->decl, flag_ipa_bit_cp) || !info) |
209ca542 PK |
5656 | { |
5657 | if (dump_file) | |
5658 | fprintf (dump_file, "Not considering %s for ipa bitwise propagation " | |
15bbb5cc | 5659 | "; -fipa-bit-cp: disabled.\n", |
209ca542 PK |
5660 | node->name ()); |
5661 | continue; | |
5662 | } | |
5663 | ||
5664 | if (info->ipcp_orig_node) | |
5665 | info = IPA_NODE_REF (info->ipcp_orig_node); | |
68188fff MJ |
5666 | if (!info->lattices) |
5667 | /* Newly expanded artificial thunks do not have lattices. */ | |
5668 | continue; | |
209ca542 PK |
5669 | |
5670 | unsigned count = ipa_get_param_count (info); | |
5671 | for (unsigned i = 0; i < count; i++) | |
5672 | { | |
5673 | ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); | |
5674 | if (plats->bits_lattice.constant_p ()) | |
5675 | { | |
5676 | found_useful_result = true; | |
5677 | break; | |
5678 | } | |
5679 | } | |
5680 | ||
155c9907 JJ |
5681 | if (!found_useful_result) |
5682 | continue; | |
209ca542 | 5683 | |
9d3e0adc ML |
5684 | ipcp_transformation_initialize (); |
5685 | ipcp_transformation *ts = ipcp_transformation_sum->get_create (node); | |
155c9907 | 5686 | vec_safe_reserve_exact (ts->bits, count); |
209ca542 | 5687 | |
155c9907 JJ |
5688 | for (unsigned i = 0; i < count; i++) |
5689 | { | |
5690 | ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); | |
86cd0334 | 5691 | ipa_bits *jfbits; |
209ca542 | 5692 | |
155c9907 | 5693 | if (plats->bits_lattice.constant_p ()) |
86cd0334 MJ |
5694 | jfbits |
5695 | = ipa_get_ipa_bits_for_value (plats->bits_lattice.get_value (), | |
5696 | plats->bits_lattice.get_mask ()); | |
155c9907 | 5697 | else |
86cd0334 | 5698 | jfbits = NULL; |
209ca542 | 5699 | |
86cd0334 MJ |
5700 | ts->bits->quick_push (jfbits); |
5701 | if (!dump_file || !jfbits) | |
155c9907 JJ |
5702 | continue; |
5703 | if (!dumped_sth) | |
5704 | { | |
464d0118 ML |
5705 | fprintf (dump_file, "Propagated bits info for function %s:\n", |
5706 | node->dump_name ()); | |
155c9907 JJ |
5707 | dumped_sth = true; |
5708 | } | |
5709 | fprintf (dump_file, " param %i: value = ", i); | |
86cd0334 | 5710 | print_hex (jfbits->value, dump_file); |
155c9907 | 5711 | fprintf (dump_file, ", mask = "); |
86cd0334 | 5712 | print_hex (jfbits->mask, dump_file); |
155c9907 JJ |
5713 | fprintf (dump_file, "\n"); |
5714 | } | |
209ca542 PK |
5715 | } |
5716 | } | |
8bc5448f KV |
5717 | |
5718 | /* Look up all VR information that we have discovered and copy it over | |
5719 | to the transformation summary. */ | |
5720 | ||
5721 | static void | |
5722 | ipcp_store_vr_results (void) | |
5723 | { | |
5724 | cgraph_node *node; | |
5725 | ||
5726 | FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node) | |
155c9907 JJ |
5727 | { |
5728 | ipa_node_params *info = IPA_NODE_REF (node); | |
5729 | bool found_useful_result = false; | |
8bc5448f | 5730 | |
a09ccc22 | 5731 | if (!info || !opt_for_fn (node->decl, flag_ipa_vrp)) |
155c9907 JJ |
5732 | { |
5733 | if (dump_file) | |
5734 | fprintf (dump_file, "Not considering %s for VR discovery " | |
5735 | "and propagate; -fipa-ipa-vrp: disabled.\n", | |
5736 | node->name ()); | |
5737 | continue; | |
5738 | } | |
8bc5448f | 5739 | |
155c9907 JJ |
5740 | if (info->ipcp_orig_node) |
5741 | info = IPA_NODE_REF (info->ipcp_orig_node); | |
68188fff MJ |
5742 | if (!info->lattices) |
5743 | /* Newly expanded artificial thunks do not have lattices. */ | |
5744 | continue; | |
8bc5448f | 5745 | |
155c9907 JJ |
5746 | unsigned count = ipa_get_param_count (info); |
5747 | for (unsigned i = 0; i < count; i++) | |
5748 | { | |
5749 | ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); | |
5750 | if (!plats->m_value_range.bottom_p () | |
5751 | && !plats->m_value_range.top_p ()) | |
5752 | { | |
5753 | found_useful_result = true; | |
5754 | break; | |
5755 | } | |
5756 | } | |
5757 | if (!found_useful_result) | |
5758 | continue; | |
8bc5448f | 5759 | |
9d3e0adc ML |
5760 | ipcp_transformation_initialize (); |
5761 | ipcp_transformation *ts = ipcp_transformation_sum->get_create (node); | |
155c9907 | 5762 | vec_safe_reserve_exact (ts->m_vr, count); |
8bc5448f | 5763 | |
155c9907 JJ |
5764 | for (unsigned i = 0; i < count; i++) |
5765 | { | |
5766 | ipcp_param_lattices *plats = ipa_get_parm_lattices (info, i); | |
5767 | ipa_vr vr; | |
8bc5448f | 5768 | |
155c9907 JJ |
5769 | if (!plats->m_value_range.bottom_p () |
5770 | && !plats->m_value_range.top_p ()) | |
5771 | { | |
5772 | vr.known = true; | |
54994253 AH |
5773 | vr.type = plats->m_value_range.m_vr.kind (); |
5774 | vr.min = wi::to_wide (plats->m_value_range.m_vr.min ()); | |
5775 | vr.max = wi::to_wide (plats->m_value_range.m_vr.max ()); | |
155c9907 JJ |
5776 | } |
5777 | else | |
5778 | { | |
5779 | vr.known = false; | |
5780 | vr.type = VR_VARYING; | |
5781 | vr.min = vr.max = wi::zero (INT_TYPE_SIZE); | |
5782 | } | |
5783 | ts->m_vr->quick_push (vr); | |
5784 | } | |
5785 | } | |
8bc5448f KV |
5786 | } |
5787 | ||
518dc859 | 5788 | /* The IPCP driver. */ |
310bc633 | 5789 | |
3cc1cccc | 5790 | static unsigned int |
518dc859 RL |
5791 | ipcp_driver (void) |
5792 | { | |
99b1c316 | 5793 | class ipa_topo_info topo; |
310bc633 | 5794 | |
1ac2bdb4 ML |
5795 | if (edge_clone_summaries == NULL) |
5796 | edge_clone_summaries = new edge_clone_summary_t (symtab); | |
5797 | ||
310bc633 MJ |
5798 | ipa_check_create_node_params (); |
5799 | ipa_check_create_edge_args (); | |
9e0b0ec3 | 5800 | clone_num_suffixes = new hash_map<const char *, unsigned>; |
aef83682 | 5801 | |
518dc859 RL |
5802 | if (dump_file) |
5803 | { | |
ca30a539 JH |
5804 | fprintf (dump_file, "\nIPA structures before propagation:\n"); |
5805 | if (dump_flags & TDF_DETAILS) | |
155c9907 | 5806 | ipa_print_all_params (dump_file); |
ca30a539 | 5807 | ipa_print_all_jump_functions (dump_file); |
518dc859 | 5808 | } |
310bc633 MJ |
5809 | |
5810 | /* Topological sort. */ | |
5811 | build_toporder_info (&topo); | |
5812 | /* Do the interprocedural propagation. */ | |
5813 | ipcp_propagate_stage (&topo); | |
5814 | /* Decide what constant propagation and cloning should be performed. */ | |
5815 | ipcp_decision_stage (&topo); | |
209ca542 PK |
5816 | /* Store results of bits propagation. */ |
5817 | ipcp_store_bits_results (); | |
8bc5448f KV |
5818 | /* Store results of value range propagation. */ |
5819 | ipcp_store_vr_results (); | |
310bc633 | 5820 | |
518dc859 | 5821 | /* Free all IPCP structures. */ |
53aedcce | 5822 | delete clone_num_suffixes; |
310bc633 | 5823 | free_toporder_info (&topo); |
1ac2bdb4 | 5824 | delete edge_clone_summaries; |
e67343d7 | 5825 | edge_clone_summaries = NULL; |
e33c6cd6 | 5826 | ipa_free_all_structures_after_ipa_cp (); |
518dc859 RL |
5827 | if (dump_file) |
5828 | fprintf (dump_file, "\nIPA constant propagation end\n"); | |
c2924966 | 5829 | return 0; |
518dc859 RL |
5830 | } |
5831 | ||
3949c4a7 MJ |
5832 | /* Initialization and computation of IPCP data structures. This is the initial |
5833 | intraprocedural analysis of functions, which gathers information to be | |
5834 | propagated later on. */ | |
5835 | ||
129a37fc JH |
5836 | static void |
5837 | ipcp_generate_summary (void) | |
5838 | { | |
3949c4a7 MJ |
5839 | struct cgraph_node *node; |
5840 | ||
129a37fc JH |
5841 | if (dump_file) |
5842 | fprintf (dump_file, "\nIPA constant propagation start:\n"); | |
129a37fc | 5843 | ipa_register_cgraph_hooks (); |
3949c4a7 | 5844 | |
c47d0034 | 5845 | FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node) |
7e729474 | 5846 | ipa_analyze_node (node); |
129a37fc JH |
5847 | } |
5848 | ||
fb3f88cc | 5849 | /* Write ipcp summary for nodes in SET. */ |
310bc633 | 5850 | |
fb3f88cc | 5851 | static void |
f27c1867 | 5852 | ipcp_write_summary (void) |
fb3f88cc | 5853 | { |
f27c1867 | 5854 | ipa_prop_write_jump_functions (); |
fb3f88cc JH |
5855 | } |
5856 | ||
5857 | /* Read ipcp summary. */ | |
310bc633 | 5858 | |
fb3f88cc JH |
5859 | static void |
5860 | ipcp_read_summary (void) | |
5861 | { | |
5862 | ipa_prop_read_jump_functions (); | |
5863 | } | |
5864 | ||
27a4cd48 DM |
5865 | namespace { |
5866 | ||
5867 | const pass_data pass_data_ipa_cp = | |
5868 | { | |
5869 | IPA_PASS, /* type */ | |
5870 | "cp", /* name */ | |
5871 | OPTGROUP_NONE, /* optinfo_flags */ | |
27a4cd48 DM |
5872 | TV_IPA_CONSTANT_PROP, /* tv_id */ |
5873 | 0, /* properties_required */ | |
5874 | 0, /* properties_provided */ | |
5875 | 0, /* properties_destroyed */ | |
5876 | 0, /* todo_flags_start */ | |
5877 | ( TODO_dump_symtab | TODO_remove_functions ), /* todo_flags_finish */ | |
518dc859 | 5878 | }; |
27a4cd48 DM |
5879 | |
5880 | class pass_ipa_cp : public ipa_opt_pass_d | |
5881 | { | |
5882 | public: | |
c3284718 RS |
5883 | pass_ipa_cp (gcc::context *ctxt) |
5884 | : ipa_opt_pass_d (pass_data_ipa_cp, ctxt, | |
5885 | ipcp_generate_summary, /* generate_summary */ | |
5886 | ipcp_write_summary, /* write_summary */ | |
5887 | ipcp_read_summary, /* read_summary */ | |
04be694e | 5888 | ipcp_write_transformation_summaries, /* |
c3284718 | 5889 | write_optimization_summary */ |
04be694e | 5890 | ipcp_read_transformation_summaries, /* |
c3284718 RS |
5891 | read_optimization_summary */ |
5892 | NULL, /* stmt_fixup */ | |
5893 | 0, /* function_transform_todo_flags_start */ | |
5894 | ipcp_transform_function, /* function_transform */ | |
5895 | NULL) /* variable_transform */ | |
27a4cd48 DM |
5896 | {} |
5897 | ||
5898 | /* opt_pass methods: */ | |
1a3d085c TS |
5899 | virtual bool gate (function *) |
5900 | { | |
5901 | /* FIXME: We should remove the optimize check after we ensure we never run | |
5902 | IPA passes when not optimizing. */ | |
2bf86c84 | 5903 | return (flag_ipa_cp && optimize) || in_lto_p; |
1a3d085c TS |
5904 | } |
5905 | ||
be55bfe6 | 5906 | virtual unsigned int execute (function *) { return ipcp_driver (); } |
27a4cd48 DM |
5907 | |
5908 | }; // class pass_ipa_cp | |
5909 | ||
5910 | } // anon namespace | |
5911 | ||
5912 | ipa_opt_pass_d * | |
5913 | make_pass_ipa_cp (gcc::context *ctxt) | |
5914 | { | |
5915 | return new pass_ipa_cp (ctxt); | |
5916 | } | |
3edf64aa DM |
5917 | |
5918 | /* Reset all state within ipa-cp.c so that we can rerun the compiler | |
5919 | within the same process. For use by toplev::finalize. */ | |
5920 | ||
5921 | void | |
5922 | ipa_cp_c_finalize (void) | |
5923 | { | |
e7a74006 | 5924 | max_count = profile_count::uninitialized (); |
3edf64aa DM |
5925 | overall_size = 0; |
5926 | max_new_size = 0; | |
12e088ba | 5927 | ipcp_free_transformation_sum (); |
3edf64aa | 5928 | } |