1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 93-95, 1996 Free Software Foundation, Inc.
3 Contributed by Michael Tiemann (tiemann@cygnus.com)
4 Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)
5
6 This file is part of GNU CC.
7
8 GNU CC is free software; you can redistribute it and/or modify
9 it under the terms of the GNU General Public License as published by
10 the Free Software Foundation; either version 2, or (at your option)
11 any later version.
12
13 GNU CC is distributed in the hope that it will be useful,
14 but WITHOUT ANY WARRANTY; without even the implied warranty of
15 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 GNU General Public License for more details.
17
18 You should have received a copy of the GNU General Public License
19 along with GNU CC; see the file COPYING. If not, write to
20 the Free Software Foundation, 59 Temple Place - Suite 330,
21 Boston, MA 02111-1307, USA. */
22
23 /* Instruction scheduling pass.
24
25 This pass implements list scheduling within basic blocks. It is
26 run after flow analysis, but before register allocation. The
27 scheduler works as follows:
28
29 We compute insn priorities based on data dependencies. Flow
30 analysis only creates a fraction of the data-dependencies we must
31 observe: namely, only those dependencies which the combiner can be
32 expected to use. For this pass, we must therefore create the
33 remaining dependencies we need to observe. Register dependencies,
34 memory dependencies, dependencies to keep function calls in order,
35 and the dependence between a conditional branch and the setting of
36 condition codes are all dealt with here.
37
38 The scheduler first traverses the data flow graph, starting with
39 the last instruction, and proceeding to the first, assigning
40 values to insn_priority as it goes. This sorts the instructions
41 topologically by data dependence.
42
43 Once priorities have been established, we order the insns using
44 list scheduling. This works as follows: starting with a list of
45 all the ready insns, and sorted according to priority number, we
46 schedule the insn from the end of the list by placing its
47 predecessors in the list according to their priority order. We
48 consider this insn scheduled by setting the pointer to the "end" of
49 the list to point to the previous insn. When an insn has no
50 predecessors, we either queue it until sufficient time has elapsed
51 or add it to the ready list. As the instructions are scheduled or
52 when stalls are introduced, the queue advances and dumps insns into
53 the ready list. When all insns down to the lowest priority have
54 been scheduled, the critical path of the basic block has been made
55 as short as possible. The remaining insns are then scheduled in
56 remaining slots.
57
58 Function unit conflicts are resolved during reverse list scheduling
59 by tracking the time when each insn is committed to the schedule
60 and from that, the time the function units it uses must be free.
61 As insns on the ready list are considered for scheduling, those
62 that would result in a blockage of the already committed insns are
63 queued until no blockage will result. Among the remaining insns on
64 the ready list to be considered, the first one with the largest
65 potential for causing a subsequent blockage is chosen.
66
67 The following list shows the order in which we want to break ties
68 among insns in the ready list:
69
70 1. choose insn with lowest conflict cost, ties broken by
71 2. choose insn with the longest path to end of bb, ties broken by
72 3. choose insn that kills the most registers, ties broken by
73 4. choose insn that conflicts with the most ready insns, or finally
74 5. choose insn with lowest UID.
75
76 Memory references complicate matters. Only if we can be certain
77 that memory references are not part of the data dependency graph
78 (via true, anti, or output dependence), can we move operations past
79 memory references. To a first approximation, reads can be done
80 independently, while writes introduce dependencies. Better
81 approximations will yield fewer dependencies.
82
83 Dependencies set up by memory references are treated in exactly the
84 same way as other dependencies, by using LOG_LINKS.
85
86 Having optimized the critical path, we may have also unduly
87 extended the lifetimes of some registers. If an operation requires
88 that constants be loaded into registers, it is certainly desirable
89 to load those constants as early as necessary, but no earlier.
90 I.e., it will not do to load up a bunch of registers at the
91 beginning of a basic block only to use them at the end, if they
92 could be loaded later, since this may result in excessive register
93 utilization.
94
95 Note that since branches are never in basic blocks, but only end
96 basic blocks, this pass will not do any branch scheduling. But
97 that is ok, since we can use GNU's delayed branch scheduling
98 pass to take care of this case.
99
100 Also note that no further optimizations based on algebraic identities
101 are performed, so this pass would be a good one to perform instruction
102 splitting, such as breaking up a multiply instruction into shifts
103 and adds where that is profitable.
104
105 Given the memory aliasing analysis that this pass should perform,
106 it should be possible to remove redundant stores to memory, and to
107 load values from registers instead of hitting memory.
108
109 This pass must update information that subsequent passes expect to be
110 correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
111 reg_n_calls_crossed, and reg_live_length. Also, basic_block_head,
112 basic_block_end.
113
114 The information in the line number notes is carefully retained by this
115 pass. All other NOTE insns are grouped in their same relative order at
116 the beginning of basic blocks that have been scheduled. */
117 \f
118 #include <stdio.h>
119 #include "config.h"
120 #include "rtl.h"
121 #include "basic-block.h"
122 #include "regs.h"
123 #include "hard-reg-set.h"
124 #include "flags.h"
125 #include "insn-config.h"
126 #include "insn-attr.h"
127
128 #ifdef INSN_SCHEDULING
129 /* Arrays set up by scheduling for the same respective purposes as
130 similar-named arrays set up by flow analysis. We work with these
131 arrays during the scheduling pass so we can compare values against
132 unscheduled code.
133
134 Values of these arrays are copied at the end of this pass into the
135 arrays set up by flow analysis. */
136 static short *sched_reg_n_deaths;
137 static int *sched_reg_n_calls_crossed;
138 static int *sched_reg_live_length;
139
140 /* Element N of reg_last_uses lists the insns that most recently used
141 (hard or pseudo) register N, and element N of reg_last_sets is the
142 insn that most recently set it, within the current basic block; zero
143 if there is no such insn. Needed for new registers introduced by splitting insns. */
144 static rtx *reg_last_uses;
145 static rtx *reg_last_sets;
146 static regset reg_pending_sets;
147 static int reg_pending_sets_all;
148
149 /* Vector indexed by INSN_UID giving the original ordering of the insns. */
150 static int *insn_luid;
151 #define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])
152
153 /* Vector indexed by INSN_UID giving each instruction a priority. */
154 static int *insn_priority;
155 #define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])
156
157 static short *insn_costs;
158 #define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]
159
160 /* Vector indexed by INSN_UID giving an encoding of the function units
161 used. */
162 static short *insn_units;
163 #define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]
164
165 /* Vector indexed by INSN_UID giving an encoding of the blockage range
166 function. The unit and the range are encoded. */
167 static unsigned int *insn_blockage;
168 #define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
169 #define UNIT_BITS 5
170 #define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
171 #define ENCODE_BLOCKAGE(U,R) \
172 ((((U) << UNIT_BITS) << BLOCKAGE_BITS \
173 | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS \
174 | MAX_BLOCKAGE_COST (R))
175 #define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
176 #define BLOCKAGE_RANGE(B) \
177 (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
178 | (B) & BLOCKAGE_MASK)
179
180 /* Encodings of the `<name>_unit_blockage_range' function. */
181 #define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
182 #define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
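
/* A purely illustrative sketch of the range encoding, assuming a host where
   HOST_BITS_PER_INT is 32 and using made-up costs: a `<name>_unit_blockage_range'
   function that wants to report a minimum blockage of 2 and a maximum of 5
   would return

       (2 << (HOST_BITS_PER_INT / 2)) | 5        ==  0x20005

   so that MIN_BLOCKAGE_COST (0x20005) == 2 and MAX_BLOCKAGE_COST (0x20005) == 5.  */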
183
184 #define DONE_PRIORITY -1
185 #define MAX_PRIORITY 0x7fffffff
186 #define TAIL_PRIORITY 0x7ffffffe
187 #define LAUNCH_PRIORITY 0x7f000001
188 #define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
189 #define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)
190
191 /* Vector indexed by INSN_UID giving number of insns referring to this insn. */
192 static int *insn_ref_count;
193 #define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])
194
195 /* Vector indexed by INSN_UID giving line-number note in effect for each
196 insn. For line-number notes, this indicates whether the note may be
197 reused. */
198 static rtx *line_note;
199 #define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])
200
201 /* Vector indexed by basic block number giving the starting line-number
202 for each basic block. */
203 static rtx *line_note_head;
204
205 /* List of important notes we must keep around. This is a pointer to the
206 last element in the list. */
207 static rtx note_list;
208
209 /* Regsets telling whether a given register is live or dead before the last
210 scheduled insn. Must scan the instructions once before scheduling to
211 determine what registers are live or dead at the end of the block. */
212 static regset bb_dead_regs;
213 static regset bb_live_regs;
214
215 /* Regset telling whether a given register is live after the insn currently
216 being scheduled. Before processing an insn, this is equal to bb_live_regs
217 above. This is used so that we can find registers that are newly born/dead
218 after processing an insn. */
219 static regset old_live_regs;
220
221 /* The chain of REG_DEAD notes. REG_DEAD notes are removed from all insns
222 during the initial scan and reused later. If there are not exactly as
223 many REG_DEAD notes in the post scheduled code as there were in the
224 prescheduled code then we trigger an abort because this indicates a bug. */
225 static rtx dead_notes;
226
227 /* Queues, etc. */
228
229 /* An instruction is ready to be scheduled when all insns following it
230 have already been scheduled. It is important to ensure that all
231 insns which use its result will not be executed until its result
232 has been computed. An insn is maintained in one of four structures:
233
234 (P) the "Pending" set of insns which cannot be scheduled until
235 their dependencies have been satisfied.
236 (Q) the "Queued" set of insns that can be scheduled when sufficient
237 time has passed.
238 (R) the "Ready" list of unscheduled, uncommitted insns.
239 (S) the "Scheduled" list of insns.
240
241 Initially, all insns are either "Pending" or "Ready" depending on
242 whether their dependencies are satisfied.
243
244 Insns move from the "Ready" list to the "Scheduled" list as they
245 are committed to the schedule. As this occurs, the insns in the
246 "Pending" list have their dependencies satisfied and move to either
247 the "Ready" list or the "Queued" set depending on whether
248 sufficient time has passed to make them ready. As time passes,
249 insns move from the "Queued" set to the "Ready" list. Insns may
250 move from the "Ready" list to the "Queued" set if they are blocked
251 due to a function unit conflict.
252
253 The "Pending" list (P) consists of the insns in the LOG_LINKS of the unscheduled
254 insns, i.e., those that are ready, queued, and pending.
255 The "Queued" set (Q) is implemented by the variable `insn_queue'.
256 The "Ready" list (R) is implemented by the variables `ready' and
257 `n_ready'.
258 The "Scheduled" list (S) is the new insn chain built by this pass.
259
260 The transition (R->S) is implemented in the scheduling loop in
261 `schedule_block' when the best insn to schedule is chosen.
262 The transition (R->Q) is implemented in `schedule_select' when an
263 insn is found to have a function unit conflict with the already
264 committed insns.
265 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
266 insns move from the ready list to the scheduled list.
267 The transition (Q->R) is implemented at the top of the scheduling
268 loop in `schedule_block' as time passes or stalls are introduced. */
269
270 /* Implement a circular buffer to delay instructions until sufficient
271 time has passed. INSN_QUEUE_SIZE is a power of two larger than
272 MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c. This is the
273 longest time an insn may be queued. */
274 static rtx insn_queue[INSN_QUEUE_SIZE];
275 static int q_ptr = 0;
276 static int q_size = 0;
277 #define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
278 #define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
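
/* For example, if INSN_QUEUE_SIZE happened to be 16 (it is always a power of
   two, so the `&' in NEXT_Q and NEXT_Q_AFTER is a cheap modulus), then with
   q_ptr == 14 an insn that must wait 5 more cycles would go in slot
   NEXT_Q_AFTER (14, 5) == 3; the buffer simply wraps around.  */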
279
280 /* Vector indexed by INSN_UID giving the minimum clock tick at which
281 the insn becomes ready. This is used to note timing constraints for
282 insns in the pending list. */
283 static int *insn_tick;
284 #define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])
285
286 /* Data structure for keeping track of register information
287 during that register's life. */
288
289 struct sometimes
290 {
291 short offset; short bit;
292 short live_length; short calls_crossed;
293 };
294
295 /* Forward declarations. */
296 static rtx canon_rtx PROTO((rtx));
297 static int rtx_equal_for_memref_p PROTO((rtx, rtx));
298 static rtx find_symbolic_term PROTO((rtx));
299 static int memrefs_conflict_p PROTO((int, rtx, int, rtx,
300 HOST_WIDE_INT));
301 static void add_dependence PROTO((rtx, rtx, enum reg_note));
302 static void remove_dependence PROTO((rtx, rtx));
303 static rtx find_insn_list PROTO((rtx, rtx));
304 static int insn_unit PROTO((rtx));
305 static unsigned int blockage_range PROTO((int, rtx));
306 static void clear_units PROTO((void));
307 static void prepare_unit PROTO((int));
308 static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
309 static void schedule_unit PROTO((int, rtx, int));
310 static int actual_hazard PROTO((int, rtx, int, int));
311 static int potential_hazard PROTO((int, rtx, int));
312 static int insn_cost PROTO((rtx, rtx, rtx));
313 static int priority PROTO((rtx));
314 static void free_pending_lists PROTO((void));
315 static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
316 static void flush_pending_lists PROTO((rtx, int));
317 static void sched_analyze_1 PROTO((rtx, rtx));
318 static void sched_analyze_2 PROTO((rtx, rtx));
319 static void sched_analyze_insn PROTO((rtx, rtx, rtx));
320 static int sched_analyze PROTO((rtx, rtx));
321 static void sched_note_set PROTO((int, rtx, int));
322 static int rank_for_schedule PROTO((rtx *, rtx *));
323 static void swap_sort PROTO((rtx *, int));
324 static void queue_insn PROTO((rtx, int));
325 static int birthing_insn PROTO((rtx));
326 static void adjust_priority PROTO((rtx));
327 static int schedule_insn PROTO((rtx, rtx *, int, int));
328 static int schedule_select PROTO((rtx *, int, int, FILE *));
329 static void create_reg_dead_note PROTO((rtx, rtx));
330 static void attach_deaths PROTO((rtx, rtx, int));
331 static void attach_deaths_insn PROTO((rtx));
332 static rtx unlink_notes PROTO((rtx, rtx));
333 static int new_sometimes_live PROTO((struct sometimes *, int, int,
334 int));
335 static void finish_sometimes_live PROTO((struct sometimes *, int));
336 static rtx reemit_notes PROTO((rtx, rtx));
337 static void schedule_block PROTO((int, FILE *));
338 static rtx regno_use_in PROTO((int, rtx));
339 static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
340 static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
341 static void update_n_sets PROTO((rtx, int));
342 static void update_flow_info PROTO((rtx, rtx, rtx, rtx));
343
344 /* Main entry point of this file. */
345 void schedule_insns PROTO((FILE *));
346
347 #endif /* INSN_SCHEDULING */
348 \f
349 #define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))
350
351 /* Vector indexed by N giving the initial (unchanging) value known
352 for pseudo-register N. */
353 static rtx *reg_known_value;
354
355 /* Vector recording for each reg_known_value whether it is due to a
356 REG_EQUIV note. Future passes (viz., reload) may replace the
357 pseudo with the equivalent expression and so we account for the
358 dependences that would be introduced if that happens. */
359 /* ??? This is a problem only on the Convex. The REG_EQUIV notes created in
360 assign_parms mention the arg pointer, and there are explicit insns in the
361 RTL that modify the arg pointer. Thus we must ensure that such insns don't
362 get scheduled across each other because that would invalidate the REG_EQUIV
363 notes. One could argue that the REG_EQUIV notes are wrong, but solving
364 the problem in the scheduler will likely give better code, so we do it
365 here. */
366 static char *reg_known_equiv_p;
367
368 /* Indicates number of valid entries in reg_known_value. */
369 static int reg_known_value_size;
370
371 static rtx
372 canon_rtx (x)
373 rtx x;
374 {
375 /* Recursively look for equivalences. */
376 if (GET_CODE (x) == REG && REGNO (x) >= FIRST_PSEUDO_REGISTER
377 && REGNO (x) <= reg_known_value_size)
378 return reg_known_value[REGNO (x)] == x
379 ? x : canon_rtx (reg_known_value[REGNO (x)]);
380 else if (GET_CODE (x) == PLUS)
381 {
382 rtx x0 = canon_rtx (XEXP (x, 0));
383 rtx x1 = canon_rtx (XEXP (x, 1));
384
385 if (x0 != XEXP (x, 0) || x1 != XEXP (x, 1))
386 {
387 /* We can tolerate LO_SUMs being offset here; these
388 rtl are used for nothing other than comparisons. */
389 if (GET_CODE (x0) == CONST_INT)
390 return plus_constant_for_output (x1, INTVAL (x0));
391 else if (GET_CODE (x1) == CONST_INT)
392 return plus_constant_for_output (x0, INTVAL (x1));
393 return gen_rtx (PLUS, GET_MODE (x), x0, x1);
394 }
395 }
396 /* This gives us much better alias analysis when called from
397 the loop optimizer. Note we want to leave the original
398 MEM alone, but need to return the canonicalized MEM with
399 all the flags with their original values. */
400 else if (GET_CODE (x) == MEM)
401 {
402 rtx copy = copy_rtx (x);
403 XEXP (copy, 0) = canon_rtx (XEXP (copy, 0));
404 x = copy;
405 }
406 return x;
407 }
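
/* A small, hypothetical example of the canonicalization above: if pseudo 105
   is known (via init_alias_analysis below) to equal (plus (reg 100) (const_int 8)),
   then canon_rtx applied to (mem (plus (reg 105) (const_int 4))) yields a copy
   of the MEM whose address is (plus (reg 100) (const_int 12)), which makes the
   memrefs_conflict_p comparisons below much more precise.  */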
408
409 /* Set up all info needed to perform alias analysis on memory references. */
410
411 void
412 init_alias_analysis ()
413 {
414 int maxreg = max_reg_num ();
415 rtx insn;
416 rtx note;
417 rtx set;
418
419 reg_known_value_size = maxreg;
420
421 reg_known_value
422 = (rtx *) oballoc ((maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx))
423 - FIRST_PSEUDO_REGISTER;
424 bzero ((char *) (reg_known_value + FIRST_PSEUDO_REGISTER),
425 (maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx));
426
427 reg_known_equiv_p
428 = (char *) oballoc ((maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char))
429 - FIRST_PSEUDO_REGISTER;
430 bzero (reg_known_equiv_p + FIRST_PSEUDO_REGISTER,
431 (maxreg - FIRST_PSEUDO_REGISTER) * sizeof (char));
432
433 /* Fill in the entries with known constant values. */
434 for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
435 if ((set = single_set (insn)) != 0
436 && GET_CODE (SET_DEST (set)) == REG
437 && REGNO (SET_DEST (set)) >= FIRST_PSEUDO_REGISTER
438 && (((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
439 && reg_n_sets[REGNO (SET_DEST (set))] == 1)
440 || (note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != 0)
441 && GET_CODE (XEXP (note, 0)) != EXPR_LIST)
442 {
443 int regno = REGNO (SET_DEST (set));
444 reg_known_value[regno] = XEXP (note, 0);
445 reg_known_equiv_p[regno] = REG_NOTE_KIND (note) == REG_EQUIV;
446 }
447
448 /* Fill in the remaining entries. */
449 while (--maxreg >= FIRST_PSEUDO_REGISTER)
450 if (reg_known_value[maxreg] == 0)
451 reg_known_value[maxreg] = regno_reg_rtx[maxreg];
452 }
453
454 /* Return 1 if X and Y are identical-looking rtx's.
455
456 We use the data in reg_known_value above to see if two registers with
457 different numbers are, in fact, equivalent. */
458
459 static int
460 rtx_equal_for_memref_p (x, y)
461 rtx x, y;
462 {
463 register int i;
464 register int j;
465 register enum rtx_code code;
466 register char *fmt;
467
468 if (x == 0 && y == 0)
469 return 1;
470 if (x == 0 || y == 0)
471 return 0;
472 x = canon_rtx (x);
473 y = canon_rtx (y);
474
475 if (x == y)
476 return 1;
477
478 code = GET_CODE (x);
479 /* Rtx's of different codes cannot be equal. */
480 if (code != GET_CODE (y))
481 return 0;
482
483 /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
484 (REG:SI x) and (REG:HI x) are NOT equivalent. */
485
486 if (GET_MODE (x) != GET_MODE (y))
487 return 0;
488
489 /* REG, LABEL_REF, and SYMBOL_REF can be compared nonrecursively. */
490
491 if (code == REG)
492 return REGNO (x) == REGNO (y);
493 if (code == LABEL_REF)
494 return XEXP (x, 0) == XEXP (y, 0);
495 if (code == SYMBOL_REF)
496 return XSTR (x, 0) == XSTR (y, 0);
497
498 /* For commutative operations, the RTXs match if the operands match in
499 either order. Also handle the simple binary and unary cases without a loop. */
500 if (code == EQ || code == NE || GET_RTX_CLASS (code) == 'c')
501 return ((rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
502 && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)))
503 || (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 1))
504 && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 0))));
505 else if (GET_RTX_CLASS (code) == '<' || GET_RTX_CLASS (code) == '2')
506 return (rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0))
507 && rtx_equal_for_memref_p (XEXP (x, 1), XEXP (y, 1)));
508 else if (GET_RTX_CLASS (code) == '1')
509 return rtx_equal_for_memref_p (XEXP (x, 0), XEXP (y, 0));
510
511 /* Compare the elements. If any pair of corresponding elements
512 fails to match, return 0 for the whole thing. */
513
514 fmt = GET_RTX_FORMAT (code);
515 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
516 {
517 switch (fmt[i])
518 {
519 case 'w':
520 if (XWINT (x, i) != XWINT (y, i))
521 return 0;
522 break;
523
524 case 'n':
525 case 'i':
526 if (XINT (x, i) != XINT (y, i))
527 return 0;
528 break;
529
530 case 'V':
531 case 'E':
532 /* Two vectors must have the same length. */
533 if (XVECLEN (x, i) != XVECLEN (y, i))
534 return 0;
535
536 /* And the corresponding elements must match. */
537 for (j = 0; j < XVECLEN (x, i); j++)
538 if (rtx_equal_for_memref_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
539 return 0;
540 break;
541
542 case 'e':
543 if (rtx_equal_for_memref_p (XEXP (x, i), XEXP (y, i)) == 0)
544 return 0;
545 break;
546
547 case 'S':
548 case 's':
549 if (strcmp (XSTR (x, i), XSTR (y, i)))
550 return 0;
551 break;
552
553 case 'u':
554 /* These are just backpointers, so they don't matter. */
555 break;
556
557 case '0':
558 break;
559
560 /* It is believed that rtx's at this level will never
561 contain anything but integers and other rtx's,
562 except within LABEL_REFs and SYMBOL_REFs. */
563 default:
564 abort ();
565 }
566 }
567 return 1;
568 }
569
570 /* Given an rtx X, find a SYMBOL_REF or LABEL_REF within
571 X and return it, or return 0 if none found. */
572
573 static rtx
574 find_symbolic_term (x)
575 rtx x;
576 {
577 register int i;
578 register enum rtx_code code;
579 register char *fmt;
580
581 code = GET_CODE (x);
582 if (code == SYMBOL_REF || code == LABEL_REF)
583 return x;
584 if (GET_RTX_CLASS (code) == 'o')
585 return 0;
586
587 fmt = GET_RTX_FORMAT (code);
588 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
589 {
590 rtx t;
591
592 if (fmt[i] == 'e')
593 {
594 t = find_symbolic_term (XEXP (x, i));
595 if (t != 0)
596 return t;
597 }
598 else if (fmt[i] == 'E')
599 break;
600 }
601 return 0;
602 }
603
604 /* Return nonzero if X and Y (memory addresses) could reference the
605 same location in memory. C is an offset accumulator. When
606 C is nonzero, we are testing aliases between X and Y + C.
607 XSIZE is the size in bytes of the X reference,
608 similarly YSIZE is the size in bytes for Y.
609
610 If XSIZE or YSIZE is zero, we do not know the amount of memory being
611 referenced (the reference was BLKmode), so make the most pessimistic
612 assumptions.
613
614 We recognize the following cases of non-conflicting memory:
615
616 (1) addresses involving the frame pointer cannot conflict
617 with addresses involving static variables.
618 (2) static variables with different addresses cannot conflict.
619
620 Nice to notice that varying addresses cannot conflict with fp if no
621 local variables had their addresses taken, but that's too hard now. */
622
623 /* ??? In Fortran, references to an array parameter can never conflict with
624 another array parameter. */
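
/* To make the overlap test below concrete: once X and Y + C are known to
   share a base, X occupies bytes [0, XSIZE) and Y occupies [C, C + YSIZE),
   so they conflict exactly when (C >= 0 && XSIZE > C) or
   (C < 0 && YSIZE + C > 0).  For instance, two 4-byte references with
   C == 4 are adjacent words and do not conflict, while the same two
   references with C == 2 overlap by two bytes and do.  */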
625
626 static int
627 memrefs_conflict_p (xsize, x, ysize, y, c)
628 rtx x, y;
629 int xsize, ysize;
630 HOST_WIDE_INT c;
631 {
632 if (GET_CODE (x) == HIGH)
633 x = XEXP (x, 0);
634 else if (GET_CODE (x) == LO_SUM)
635 x = XEXP (x, 1);
636 else
637 x = canon_rtx (x);
638 if (GET_CODE (y) == HIGH)
639 y = XEXP (y, 0);
640 else if (GET_CODE (y) == LO_SUM)
641 y = XEXP (y, 1);
642 else
643 y = canon_rtx (y);
644
645 if (rtx_equal_for_memref_p (x, y))
646 return (xsize == 0 || ysize == 0 ||
647 (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
648
649 if (y == frame_pointer_rtx || y == hard_frame_pointer_rtx
650 || y == stack_pointer_rtx)
651 {
652 rtx t = y;
653 int tsize = ysize;
654 y = x; ysize = xsize;
655 x = t; xsize = tsize;
656 }
657
658 if (x == frame_pointer_rtx || x == hard_frame_pointer_rtx
659 || x == stack_pointer_rtx)
660 {
661 rtx y1;
662
663 if (CONSTANT_P (y))
664 return 0;
665
666 if (GET_CODE (y) == PLUS
667 && canon_rtx (XEXP (y, 0)) == x
668 && (y1 = canon_rtx (XEXP (y, 1)))
669 && GET_CODE (y1) == CONST_INT)
670 {
671 c += INTVAL (y1);
672 return (xsize == 0 || ysize == 0
673 || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
674 }
675
676 if (GET_CODE (y) == PLUS
677 && (y1 = canon_rtx (XEXP (y, 0)))
678 && CONSTANT_P (y1))
679 return 0;
680
681 return 1;
682 }
683
684 if (GET_CODE (x) == PLUS)
685 {
686 /* The fact that X is canonicalized means that this
687 PLUS rtx is canonicalized. */
688 rtx x0 = XEXP (x, 0);
689 rtx x1 = XEXP (x, 1);
690
691 if (GET_CODE (y) == PLUS)
692 {
693 /* The fact that Y is canonicalized means that this
694 PLUS rtx is canonicalized. */
695 rtx y0 = XEXP (y, 0);
696 rtx y1 = XEXP (y, 1);
697
698 if (rtx_equal_for_memref_p (x1, y1))
699 return memrefs_conflict_p (xsize, x0, ysize, y0, c);
700 if (rtx_equal_for_memref_p (x0, y0))
701 return memrefs_conflict_p (xsize, x1, ysize, y1, c);
702 if (GET_CODE (x1) == CONST_INT)
703 if (GET_CODE (y1) == CONST_INT)
704 return memrefs_conflict_p (xsize, x0, ysize, y0,
705 c - INTVAL (x1) + INTVAL (y1));
706 else
707 return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
708 else if (GET_CODE (y1) == CONST_INT)
709 return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
710
711 /* Handle case where we cannot understand iteration operators,
712 but we notice that the base addresses are distinct objects. */
713 x = find_symbolic_term (x);
714 if (x == 0)
715 return 1;
716 y = find_symbolic_term (y);
717 if (y == 0)
718 return 1;
719 return rtx_equal_for_memref_p (x, y);
720 }
721 else if (GET_CODE (x1) == CONST_INT)
722 return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
723 }
724 else if (GET_CODE (y) == PLUS)
725 {
726 /* The fact that Y is canonicalized means that this
727 PLUS rtx is canonicalized. */
728 rtx y0 = XEXP (y, 0);
729 rtx y1 = XEXP (y, 1);
730
731 if (GET_CODE (y1) == CONST_INT)
732 return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
733 else
734 return 1;
735 }
736
737 if (GET_CODE (x) == GET_CODE (y))
738 switch (GET_CODE (x))
739 {
740 case MULT:
741 {
742 /* Handle cases where we expect the second operands to be the
743 same, and check only whether the first operand would conflict
744 or not. */
745 rtx x0, y0;
746 rtx x1 = canon_rtx (XEXP (x, 1));
747 rtx y1 = canon_rtx (XEXP (y, 1));
748 if (! rtx_equal_for_memref_p (x1, y1))
749 return 1;
750 x0 = canon_rtx (XEXP (x, 0));
751 y0 = canon_rtx (XEXP (y, 0));
752 if (rtx_equal_for_memref_p (x0, y0))
753 return (xsize == 0 || ysize == 0
754 || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
755
756 /* Can't properly adjust our sizes. */
757 if (GET_CODE (x1) != CONST_INT)
758 return 1;
759 xsize /= INTVAL (x1);
760 ysize /= INTVAL (x1);
761 c /= INTVAL (x1);
762 return memrefs_conflict_p (xsize, x0, ysize, y0, c);
763 }
764 }
765
766 if (CONSTANT_P (x))
767 {
768 if (GET_CODE (x) == CONST_INT && GET_CODE (y) == CONST_INT)
769 {
770 c += (INTVAL (y) - INTVAL (x));
771 return (xsize == 0 || ysize == 0
772 || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
773 }
774
775 if (GET_CODE (x) == CONST)
776 {
777 if (GET_CODE (y) == CONST)
778 return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
779 ysize, canon_rtx (XEXP (y, 0)), c);
780 else
781 return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
782 ysize, y, c);
783 }
784 if (GET_CODE (y) == CONST)
785 return memrefs_conflict_p (xsize, x, ysize,
786 canon_rtx (XEXP (y, 0)), c);
787
788 if (CONSTANT_P (y))
789 return (rtx_equal_for_memref_p (x, y)
790 && (xsize == 0 || ysize == 0
791 || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0)));
792
793 return 1;
794 }
795 return 1;
796 }
797
798 /* Functions to compute memory dependencies.
799
800 Since we process the insns in execution order, we can build tables
801 to keep track of what registers are fixed (and not aliased), what registers
802 are varying in known ways, and what registers are varying in unknown
803 ways.
804
805 If both memory references are volatile, then there must always be a
806 dependence between the two references, since their order can not be
807 changed. A volatile and non-volatile reference can be interchanged
808 though.
809
810 A MEM_IN_STRUCT reference at a non-QImode varying address can never
811 conflict with a non-MEM_IN_STRUCT reference at a fixed address. We must
812 allow QImode aliasing because the ANSI C standard allows character
813 pointers to alias anything. We are assuming that characters are
814 always QImode here. */
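
/* For instance, under the rules above a word-sized store to a fixed static
   scalar (not MEM_IN_STRUCT) is assumed not to conflict with an SImode
   reference through a varying structure or array address such as a->f[i]
   (MEM_IN_STRUCT), whereas a load through a `char *' pointer is QImode and
   is therefore treated as possibly aliasing either of them.  */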
815
816 /* Read dependence: X is read after read in MEM takes place. There can
817 only be a dependence here if both reads are volatile. */
818
819 int
820 read_dependence (mem, x)
821 rtx mem;
822 rtx x;
823 {
824 return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
825 }
826
827 /* True dependence: X is read after store in MEM takes place. */
828
829 int
830 true_dependence (mem, x)
831 rtx mem;
832 rtx x;
833 {
834 /* If X is an unchanging read, then it can't possibly conflict with any
835 non-unchanging store. It may conflict with an unchanging write though,
836 because there may be a single store to this address to initialize it.
837 Just fall through to the code below to resolve the case where we have
838 both an unchanging read and an unchanging write. This won't handle all
839 cases optimally, but the possible performance loss should be
840 negligible. */
841 x = canon_rtx (x);
842 mem = canon_rtx (mem);
843 if (RTX_UNCHANGING_P (x) && ! RTX_UNCHANGING_P (mem))
844 return 0;
845
846 return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
847 || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
848 SIZE_FOR_MODE (x), XEXP (x, 0), 0)
849 && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
850 && GET_MODE (mem) != QImode
851 && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
852 && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
853 && GET_MODE (x) != QImode
854 && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
855 }
856
857 /* Anti dependence: X is written after read in MEM takes place. */
858
859 int
860 anti_dependence (mem, x)
861 rtx mem;
862 rtx x;
863 {
864 /* If MEM is an unchanging read, then it can't possibly conflict with
865 the store to X, because there is at most one store to MEM, and it must
866 have occurred somewhere before MEM. */
867 x = canon_rtx (x);
868 mem = canon_rtx (mem);
869 if (RTX_UNCHANGING_P (mem))
870 return 0;
871
872 return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
873 || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
874 SIZE_FOR_MODE (x), XEXP (x, 0), 0)
875 && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
876 && GET_MODE (mem) != QImode
877 && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
878 && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
879 && GET_MODE (x) != QImode
880 && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
881 }
882
883 /* Output dependence: X is written after store in MEM takes place. */
884
885 int
886 output_dependence (mem, x)
887 rtx mem;
888 rtx x;
889 {
890 x = canon_rtx (x);
891 mem = canon_rtx (mem);
892 return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
893 || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
894 SIZE_FOR_MODE (x), XEXP (x, 0), 0)
895 && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
896 && GET_MODE (mem) != QImode
897 && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
898 && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
899 && GET_MODE (x) != QImode
900 && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
901 }
902 \f
903 /* Helper functions for instruction scheduling. */
904
905 /* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
906 LOG_LINKS of INSN, if not already there. DEP_TYPE indicates the type
907 of dependence that this link represents. */
908
909 static void
910 add_dependence (insn, elem, dep_type)
911 rtx insn;
912 rtx elem;
913 enum reg_note dep_type;
914 {
915 rtx link, next;
916
917 /* Don't depend an insn on itself. */
918 if (insn == elem)
919 return;
920
921 /* If elem is part of a sequence that must be scheduled together, then
922 make the dependence point to the last insn of the sequence.
923 When HAVE_cc0, it is possible for NOTEs to exist between users and
924 setters of the condition codes, so we must skip past notes here.
925 Otherwise, NOTEs are impossible here. */
926
927 next = NEXT_INSN (elem);
928
929 #ifdef HAVE_cc0
930 while (next && GET_CODE (next) == NOTE)
931 next = NEXT_INSN (next);
932 #endif
933
934 if (next && SCHED_GROUP_P (next)
935 && GET_CODE (next) != CODE_LABEL)
936 {
937 /* Notes will never intervene here though, so don't bother checking
938 for them. */
939 /* We must reject CODE_LABELs, so that we don't get confused by one
940 that has LABEL_PRESERVE_P set, which is represented by the same
941 bit in the rtl as SCHED_GROUP_P. A CODE_LABEL can never be
942 SCHED_GROUP_P. */
943 while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
944 && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
945 next = NEXT_INSN (next);
946
947 /* Again, don't depend an insn on itself. */
948 if (insn == next)
949 return;
950
951 /* Make the dependence to NEXT, the last insn of the group, instead
952 of the original ELEM. */
953 elem = next;
954 }
955
956 /* Check that we don't already have this dependence. */
957 for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
958 if (XEXP (link, 0) == elem)
959 {
960 /* If this is a more restrictive type of dependence than the existing
961 one, then change the existing dependence to this type. */
962 if ((int) dep_type < (int) REG_NOTE_KIND (link))
963 PUT_REG_NOTE_KIND (link, dep_type);
964 return;
965 }
966 /* Might want to check one level of transitivity to save conses. */
967
968 link = rtx_alloc (INSN_LIST);
969 /* Insn dependency, not data dependency. */
970 PUT_REG_NOTE_KIND (link, dep_type);
971 XEXP (link, 0) = elem;
972 XEXP (link, 1) = LOG_LINKS (insn);
973 LOG_LINKS (insn) = link;
974 }
975
976 /* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
977 of INSN. Abort if not found. */
978
979 static void
980 remove_dependence (insn, elem)
981 rtx insn;
982 rtx elem;
983 {
984 rtx prev, link;
985 int found = 0;
986
987 for (prev = 0, link = LOG_LINKS (insn); link;
988 prev = link, link = XEXP (link, 1))
989 {
990 if (XEXP (link, 0) == elem)
991 {
992 if (prev)
993 XEXP (prev, 1) = XEXP (link, 1);
994 else
995 LOG_LINKS (insn) = XEXP (link, 1);
996 found = 1;
997 }
998 }
999
1000 if (! found)
1001 abort ();
1002 return;
1003 }
1004 \f
1005 #ifndef INSN_SCHEDULING
1006 void
1007 schedule_insns (dump_file)
1008 FILE *dump_file;
1009 {
1010 }
1011 #else
1012 #ifndef __GNUC__
1013 #define __inline
1014 #endif
1015
1016 /* Computation of memory dependencies. */
1017
1018 /* The *_insns and *_mems are paired lists. Each pending memory operation
1019 will have a pointer to the MEM rtx on one list and a pointer to the
1020 containing insn on the other list in the same place in the list. */
1021
1022 /* We can't use add_dependence like the old code did, because a single insn
1023 may have multiple memory accesses, and hence needs to be on the list
1024 once for each memory access. Add_dependence won't let you add an insn
1025 to a list more than once. */
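
/* So, purely as an illustration, an insn that both reads one memory location
   and writes another (say a memory-to-memory move, on a target that has one)
   contributes one entry to pending_read_insns/pending_read_mems for the read
   and one entry to pending_write_insns/pending_write_mems for the write; the
   insn and its MEM sit at the same position in each pair of lists.  */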
1026
1027 /* An INSN_LIST containing all insns with pending read operations. */
1028 static rtx pending_read_insns;
1029
1030 /* An EXPR_LIST containing all MEM rtx's which are pending reads. */
1031 static rtx pending_read_mems;
1032
1033 /* An INSN_LIST containing all insns with pending write operations. */
1034 static rtx pending_write_insns;
1035
1036 /* An EXPR_LIST containing all MEM rtx's which are pending writes. */
1037 static rtx pending_write_mems;
1038
1039 /* Indicates the combined length of the two pending lists. We must prevent
1040 these lists from ever growing too large since the number of dependencies
1041 produced is at least O(N*N), and execution time is at least O(4*N*N), as
1042 a function of the length of these pending lists. */
1043
1044 static int pending_lists_length;
1045
1046 /* An INSN_LIST containing all INSN_LISTs allocated but currently unused. */
1047
1048 static rtx unused_insn_list;
1049
1050 /* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused. */
1051
1052 static rtx unused_expr_list;
1053
1054 /* The last insn upon which all memory references must depend.
1055 This is an insn which flushed the pending lists, creating a dependency
1056 between it and all previously pending memory references. This creates
1057 a barrier (or a checkpoint) which no memory reference is allowed to cross.
1058
1059 This includes all non-constant CALL_INSNs. When we do interprocedural
1060 alias analysis, this restriction can be relaxed.
1061 This may also be an INSN that writes memory if the pending lists grow
1062 too large. */
1063
1064 static rtx last_pending_memory_flush;
1065
1066 /* The last function call we have seen. All hard regs, and, of course,
1067 the last function call, must depend on this. */
1068
1069 static rtx last_function_call;
1070
1071 /* The LOG_LINKS field of this is a list of insns which use a pseudo register
1072 that does not already cross a call. We create dependencies between each
1073 of those insns and the next call insn, to ensure that they won't cross a call
1074 after scheduling is done. */
1075
1076 static rtx sched_before_next_call;
1077
1078 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
1079 so that insns independent of the last scheduled insn will be preferred
1080 over dependent instructions. */
1081
1082 static rtx last_scheduled_insn;
1083
1084 /* Process an insn's memory dependencies. There are four kinds of
1085 dependencies:
1086
1087 (0) read dependence: read follows read
1088 (1) true dependence: read follows write
1089 (2) anti dependence: write follows read
1090 (3) output dependence: write follows write
1091
1092 We are careful to build only dependencies which actually exist, and
1093 use transitivity to avoid building too many links. */
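
/* As a hypothetical source-level example of the four kinds, assuming *p and
   *q may refer to the same location:

       *p = a;      (1)
       b = *q;      (2)
       c = *q;      (3)
       *p = d;      (4)

   insn (2) has a true dependence on (1), insn (3) has only a read dependence
   on (2) (which matters only if both reads are volatile), and insn (4) has an
   anti dependence on (2) and (3) and an output dependence on (1).  */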
1094 \f
1095 /* Return the INSN_LIST containing INSN in LIST, or NULL
1096 if LIST does not contain INSN. */
1097
1098 __inline static rtx
1099 find_insn_list (insn, list)
1100 rtx insn;
1101 rtx list;
1102 {
1103 while (list)
1104 {
1105 if (XEXP (list, 0) == insn)
1106 return list;
1107 list = XEXP (list, 1);
1108 }
1109 return 0;
1110 }
1111
1112 /* Compute the function units used by INSN. This caches the value
1113 returned by function_units_used. A function unit is encoded as the
1114 unit number if the value is non-negative and the complement of a
1115 mask if the value is negative. A function unit index is the
1116 non-negative encoding. */
1117
1118 __inline static int
1119 insn_unit (insn)
1120 rtx insn;
1121 {
1122 register int unit = INSN_UNIT (insn);
1123
1124 if (unit == 0)
1125 {
1126 recog_memoized (insn);
1127
1128 /* A USE insn, or something else we don't need to understand.
1129 We can't pass these directly to function_units_used because it will
1130 trigger a fatal error for unrecognizable insns. */
1131 if (INSN_CODE (insn) < 0)
1132 unit = -1;
1133 else
1134 {
1135 unit = function_units_used (insn);
1136 /* Increment non-negative values so we can cache zero. */
1137 if (unit >= 0) unit++;
1138 }
1139 /* We only cache 16 bits of the result, so if the value is out of
1140 range, don't cache it. */
1141 if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
1142 || unit >= 0
1143 || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
1144 INSN_UNIT (insn) = unit;
1145 }
1146 return (unit > 0 ? unit - 1 : unit);
1147 }
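
/* For example, an insn that needs only function unit 2 has INSN_UNIT == 2,
   while on a hypothetical machine an insn whose pattern uses units 0 and 2
   would be encoded as the complement of the mask 0x5 (binary 101), i.e.
   ~5 == -6; callers such as prepare_unit and schedule_unit recover the mask
   by complementing the value again and walking its bits.  */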
1148
1149 /* Compute the blockage range for executing INSN on UNIT. This caches
1150 the value returned by the blockage_range_function for the unit.
1151 These values are encoded in an int where the upper half gives the
1152 minimum value and the lower half gives the maximum value. */
1153
1154 __inline static unsigned int
1155 blockage_range (unit, insn)
1156 int unit;
1157 rtx insn;
1158 {
1159 unsigned int blockage = INSN_BLOCKAGE (insn);
1160 unsigned int range;
1161
1162 if (UNIT_BLOCKED (blockage) != unit + 1)
1163 {
1164 range = function_units[unit].blockage_range_function (insn);
1165 /* We only cache the blockage range for one unit and then only if
1166 the values fit. */
1167 if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
1168 INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
1169 }
1170 else
1171 range = BLOCKAGE_RANGE (blockage);
1172
1173 return range;
1174 }
1175
1176 /* A vector indexed by function unit instance giving the last insn to use
1177 the unit. The value of the function unit instance index for unit U
1178 instance I is (U + I * FUNCTION_UNITS_SIZE). */
1179 static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];
1180
1181 /* A vector indexed by function unit instance giving the minimum time when
1182 the unit will unblock based on the maximum blockage cost. */
1183 static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];
1184
1185 /* A vector indexed by function unit number giving the number of insns
1186 that remain to use the unit. */
1187 static int unit_n_insns[FUNCTION_UNITS_SIZE];
1188
1189 /* Reset the function unit state to the null state. */
1190
1191 static void
1192 clear_units ()
1193 {
1194 bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
1195 bzero ((char *) unit_tick, sizeof (unit_tick));
1196 bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
1197 }
1198
1199 /* Record an insn as one that will use the units encoded by UNIT. */
1200
1201 __inline static void
1202 prepare_unit (unit)
1203 int unit;
1204 {
1205 int i;
1206
1207 if (unit >= 0)
1208 unit_n_insns[unit]++;
1209 else
1210 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
1211 if ((unit & 1) != 0)
1212 prepare_unit (i);
1213 }
1214
1215 /* Return the actual hazard cost of executing INSN on the unit UNIT,
1216 instance INSTANCE at time CLOCK if the previous actual hazard cost
1217 was COST. */
1218
1219 __inline static int
1220 actual_hazard_this_instance (unit, instance, insn, clock, cost)
1221 int unit, instance, clock, cost;
1222 rtx insn;
1223 {
1224 int tick = unit_tick[instance];
1225
1226 if (tick - clock > cost)
1227 {
1228 /* The scheduler is operating in reverse, so INSN is the executing
1229 insn and the unit's last insn is the candidate insn. We want a
1230 more exact measure of the blockage if we execute INSN at CLOCK
1231 given when we committed the execution of the unit's last insn.
1232
1233 The blockage value is given by either the unit's max blockage
1234 constant, blockage range function, or blockage function. Use
1235 the most exact form for the given unit. */
1236
1237 if (function_units[unit].blockage_range_function)
1238 {
1239 if (function_units[unit].blockage_function)
1240 tick += (function_units[unit].blockage_function
1241 (insn, unit_last_insn[instance])
1242 - function_units[unit].max_blockage);
1243 else
1244 tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
1245 - function_units[unit].max_blockage);
1246 }
1247 if (tick - clock > cost)
1248 cost = tick - clock;
1249 }
1250 return cost;
1251 }
1252
1253 /* Record INSN as having begun execution on the units encoded by UNIT at
1254 time CLOCK. */
1255
1256 __inline static void
1257 schedule_unit (unit, insn, clock)
1258 int unit, clock;
1259 rtx insn;
1260 {
1261 int i;
1262
1263 if (unit >= 0)
1264 {
1265 int instance = unit;
1266 #if MAX_MULTIPLICITY > 1
1267 /* Find the first free instance of the function unit and use that
1268 one. We assume that one is free. */
1269 for (i = function_units[unit].multiplicity - 1; i > 0; i--)
1270 {
1271 if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
1272 break;
1273 instance += FUNCTION_UNITS_SIZE;
1274 }
1275 #endif
1276 unit_last_insn[instance] = insn;
1277 unit_tick[instance] = (clock + function_units[unit].max_blockage);
1278 }
1279 else
1280 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
1281 if ((unit & 1) != 0)
1282 schedule_unit (i, insn, clock);
1283 }
1284
1285 /* Return the actual hazard cost of executing INSN on the units encoded by
1286 UNIT at time CLOCK if the previous actual hazard cost was COST. */
1287
1288 __inline static int
1289 actual_hazard (unit, insn, clock, cost)
1290 int unit, clock, cost;
1291 rtx insn;
1292 {
1293 int i;
1294
1295 if (unit >= 0)
1296 {
1297 /* Find the instance of the function unit with the minimum hazard. */
1298 int instance = unit;
1299 int best_cost = actual_hazard_this_instance (unit, instance, insn,
1300 clock, cost);
1301 int this_cost;
1302
1303 #if MAX_MULTIPLICITY > 1
1304 if (best_cost > cost)
1305 {
1306 for (i = function_units[unit].multiplicity - 1; i > 0; i--)
1307 {
1308 instance += FUNCTION_UNITS_SIZE;
1309 this_cost = actual_hazard_this_instance (unit, instance, insn,
1310 clock, cost);
1311 if (this_cost < best_cost)
1312 {
1313 best_cost = this_cost;
1314 if (this_cost <= cost)
1315 break;
1316 }
1317 }
1318 }
1319 #endif
1320 cost = MAX (cost, best_cost);
1321 }
1322 else
1323 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
1324 if ((unit & 1) != 0)
1325 cost = actual_hazard (i, insn, clock, cost);
1326
1327 return cost;
1328 }
1329
1330 /* Return the potential hazard cost of executing an instruction on the
1331 units encoded by UNIT if the previous potential hazard cost was COST.
1332 An insn with a large blockage time is chosen in preference to one
1333 with a smaller time; an insn that uses a unit that is more likely
1334 to be used is chosen in preference to one with a unit that is less
1335 used. We are trying to minimize a subsequent actual hazard. */
1336
1337 __inline static int
1338 potential_hazard (unit, insn, cost)
1339 int unit, cost;
1340 rtx insn;
1341 {
1342 int i, ncost;
1343 unsigned int minb, maxb;
1344
1345 if (unit >= 0)
1346 {
1347 minb = maxb = function_units[unit].max_blockage;
1348 if (maxb > 1)
1349 {
1350 if (function_units[unit].blockage_range_function)
1351 {
1352 maxb = minb = blockage_range (unit, insn);
1353 maxb = MAX_BLOCKAGE_COST (maxb);
1354 minb = MIN_BLOCKAGE_COST (minb);
1355 }
1356
1357 if (maxb > 1)
1358 {
1359 /* Make the number of instructions left dominate. Make the
1360 minimum delay dominate the maximum delay. If all these
1361 are the same, use the unit number to add an arbitrary
1362 ordering. Other terms can be added. */
1363 ncost = minb * 0x40 + maxb;
1364 ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
1365 if (ncost > cost)
1366 cost = ncost;
1367 }
1368 }
1369 }
1370 else
1371 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
1372 if ((unit & 1) != 0)
1373 cost = potential_hazard (i, insn, cost);
1374
1375 return cost;
1376 }
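
/* A worked instance of the weighting above, with made-up numbers: for an insn
   whose blockage range on unit 1 is [2, 3] when 4 insns remain to use that unit,
   ncost = (2 * 0x40 + 3) * ((4 - 1) * 0x1000 + 1) = 131 * 12289 = 1609859,
   so the number of insns left dominates, then the minimum delay, then the
   maximum, with the unit number merely breaking ties.  */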
1377
1378 /* Compute cost of executing INSN given the dependence LINK on the insn USED.
1379 This is the number of virtual cycles taken between instruction issue and
1380 instruction results. */
1381
1382 __inline static int
1383 insn_cost (insn, link, used)
1384 rtx insn, link, used;
1385 {
1386 register int cost = INSN_COST (insn);
1387
1388 if (cost == 0)
1389 {
1390 recog_memoized (insn);
1391
1392 /* A USE insn, or something else we don't need to understand.
1393 We can't pass these directly to result_ready_cost because it will
1394 trigger a fatal error for unrecognizable insns. */
1395 if (INSN_CODE (insn) < 0)
1396 {
1397 INSN_COST (insn) = 1;
1398 return 1;
1399 }
1400 else
1401 {
1402 cost = result_ready_cost (insn);
1403
1404 if (cost < 1)
1405 cost = 1;
1406
1407 INSN_COST (insn) = cost;
1408 }
1409 }
1410
1411 /* A USE insn should never require the value used to be computed. This
1412 allows the computation of a function's result and parameter values to
1413 overlap the return and call. */
1414 recog_memoized (used);
1415 if (INSN_CODE (used) < 0)
1416 LINK_COST_FREE (link) = 1;
1417
1418 /* If some dependencies vary the cost, compute the adjustment. Most
1419 commonly, the adjustment is complete: either the cost is ignored
1420 (in the case of an output- or anti-dependence), or the cost is
1421 unchanged. These values are cached in the link as LINK_COST_FREE
1422 and LINK_COST_ZERO. */
1423
1424 if (LINK_COST_FREE (link))
1425 cost = 1;
1426 #ifdef ADJUST_COST
1427 else if (! LINK_COST_ZERO (link))
1428 {
1429 int ncost = cost;
1430
1431 ADJUST_COST (used, link, insn, ncost);
1432 if (ncost <= 1)
1433 LINK_COST_FREE (link) = ncost = 1;
1434 if (cost == ncost)
1435 LINK_COST_ZERO (link) = 1;
1436 cost = ncost;
1437 }
1438 #endif
1439 return cost;
1440 }
1441
1442 /* Compute the priority number for INSN. */
1443
1444 static int
1445 priority (insn)
1446 rtx insn;
1447 {
1448 if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
1449 {
1450 int prev_priority;
1451 int max_priority;
1452 int this_priority = INSN_PRIORITY (insn);
1453 rtx prev;
1454
1455 if (this_priority > 0)
1456 return this_priority;
1457
1458 max_priority = 1;
1459
1460 /* Nonzero if these insns must be scheduled together. */
1461 if (SCHED_GROUP_P (insn))
1462 {
1463 prev = insn;
1464 while (SCHED_GROUP_P (prev))
1465 {
1466 prev = PREV_INSN (prev);
1467 INSN_REF_COUNT (prev) += 1;
1468 }
1469 }
1470
1471 for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
1472 {
1473 rtx x = XEXP (prev, 0);
1474
1475 /* A dependence pointing to a note or deleted insn is always
1476 obsolete, because sched_analyze_insn will have created any
1477 necessary new dependences which replace it. Notes and deleted
1478 insns can be created when instructions are deleted by insn
1479 splitting, or by register allocation. */
1480 if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
1481 {
1482 remove_dependence (insn, x);
1483 continue;
1484 }
1485
1486 /* Clear the link cost adjustment bits. */
1487 LINK_COST_FREE (prev) = 0;
1488 #ifdef ADJUST_COST
1489 LINK_COST_ZERO (prev) = 0;
1490 #endif
1491
1492 /* This priority calculation was chosen because it results in the
1493 least instruction movement, and does not hurt the performance
1494 of the resulting code compared to the old algorithm.
1495 This makes the sched algorithm more stable, which results
1496 in better code, because there is less register pressure,
1497 cross jumping is more likely to work, and debugging is easier.
1498
1499 When all instructions have a latency of 1, there is no need to
1500 move any instructions. Subtracting one here ensures that in such
1501 cases all instructions will end up with a priority of one, and
1502 hence no scheduling will be done.
1503
1504 The original code did not subtract the one, and added the
1505 insn_cost of the current instruction to its priority (e.g.
1506 move the insn_cost call down to the end). */
1507
1508 prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;
1509
1510 if (prev_priority > max_priority)
1511 max_priority = prev_priority;
1512 INSN_REF_COUNT (x) += 1;
1513 }
1514
1515 prepare_unit (insn_unit (insn));
1516 INSN_PRIORITY (insn) = max_priority;
1517 return INSN_PRIORITY (insn);
1518 }
1519 return 0;
1520 }
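
/* A small worked example of the recurrence above: take three insns A, B and C
   where B depends on A and C depends on B, each producer has an insn_cost of 2,
   and no link-cost adjustment applies.  Then priority (A) = 1 (no dependencies),
   priority (B) = priority (A) + 2 - 1 = 2, and priority (C) = priority (B) + 2 - 1 = 3,
   so the longest latency-weighted dependence chain leading to an insn
   determines its priority.  */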
1521 \f
1522 /* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
1523 them to the unused_*_list variables, so that they can be reused. */
1524
1525 static void
1526 free_pending_lists ()
1527 {
1528 register rtx link, prev_link;
1529
1530 if (pending_read_insns)
1531 {
1532 prev_link = pending_read_insns;
1533 link = XEXP (prev_link, 1);
1534
1535 while (link)
1536 {
1537 prev_link = link;
1538 link = XEXP (link, 1);
1539 }
1540
1541 XEXP (prev_link, 1) = unused_insn_list;
1542 unused_insn_list = pending_read_insns;
1543 pending_read_insns = 0;
1544 }
1545
1546 if (pending_write_insns)
1547 {
1548 prev_link = pending_write_insns;
1549 link = XEXP (prev_link, 1);
1550
1551 while (link)
1552 {
1553 prev_link = link;
1554 link = XEXP (link, 1);
1555 }
1556
1557 XEXP (prev_link, 1) = unused_insn_list;
1558 unused_insn_list = pending_write_insns;
1559 pending_write_insns = 0;
1560 }
1561
1562 if (pending_read_mems)
1563 {
1564 prev_link = pending_read_mems;
1565 link = XEXP (prev_link, 1);
1566
1567 while (link)
1568 {
1569 prev_link = link;
1570 link = XEXP (link, 1);
1571 }
1572
1573 XEXP (prev_link, 1) = unused_expr_list;
1574 unused_expr_list = pending_read_mems;
1575 pending_read_mems = 0;
1576 }
1577
1578 if (pending_write_mems)
1579 {
1580 prev_link = pending_write_mems;
1581 link = XEXP (prev_link, 1);
1582
1583 while (link)
1584 {
1585 prev_link = link;
1586 link = XEXP (link, 1);
1587 }
1588
1589 XEXP (prev_link, 1) = unused_expr_list;
1590 unused_expr_list = pending_write_mems;
1591 pending_write_mems = 0;
1592 }
1593 }
1594
1595 /* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
1596 The MEM is a memory reference contained within INSN, which we are saving
1597 so that we can do memory aliasing on it. */
1598
1599 static void
1600 add_insn_mem_dependence (insn_list, mem_list, insn, mem)
1601 rtx *insn_list, *mem_list, insn, mem;
1602 {
1603 register rtx link;
1604
1605 if (unused_insn_list)
1606 {
1607 link = unused_insn_list;
1608 unused_insn_list = XEXP (link, 1);
1609 }
1610 else
1611 link = rtx_alloc (INSN_LIST);
1612 XEXP (link, 0) = insn;
1613 XEXP (link, 1) = *insn_list;
1614 *insn_list = link;
1615
1616 if (unused_expr_list)
1617 {
1618 link = unused_expr_list;
1619 unused_expr_list = XEXP (link, 1);
1620 }
1621 else
1622 link = rtx_alloc (EXPR_LIST);
1623 XEXP (link, 0) = mem;
1624 XEXP (link, 1) = *mem_list;
1625 *mem_list = link;
1626
1627 pending_lists_length++;
1628 }
1629 \f
1630 /* Make a dependency between every memory reference on the pending lists
1631 and INSN, thus flushing the pending lists. If ONLY_WRITE, don't flush
1632 the read list. */
1633
1634 static void
1635 flush_pending_lists (insn, only_write)
1636 rtx insn;
1637 int only_write;
1638 {
1639 rtx link;
1640
1641 while (pending_read_insns && ! only_write)
1642 {
1643 add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);
1644
1645 link = pending_read_insns;
1646 pending_read_insns = XEXP (pending_read_insns, 1);
1647 XEXP (link, 1) = unused_insn_list;
1648 unused_insn_list = link;
1649
1650 link = pending_read_mems;
1651 pending_read_mems = XEXP (pending_read_mems, 1);
1652 XEXP (link, 1) = unused_expr_list;
1653 unused_expr_list = link;
1654 }
1655 while (pending_write_insns)
1656 {
1657 add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);
1658
1659 link = pending_write_insns;
1660 pending_write_insns = XEXP (pending_write_insns, 1);
1661 XEXP (link, 1) = unused_insn_list;
1662 unused_insn_list = link;
1663
1664 link = pending_write_mems;
1665 pending_write_mems = XEXP (pending_write_mems, 1);
1666 XEXP (link, 1) = unused_expr_list;
1667 unused_expr_list = link;
1668 }
1669 pending_lists_length = 0;
1670
1671 if (last_pending_memory_flush)
1672 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1673
1674 last_pending_memory_flush = insn;
1675 }
1676
1677 /* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
1678 by the write to the destination of X, and reads of everything mentioned. */
1679
1680 static void
1681 sched_analyze_1 (x, insn)
1682 rtx x;
1683 rtx insn;
1684 {
1685 register int regno;
1686 register rtx dest = SET_DEST (x);
1687
1688 if (dest == 0)
1689 return;
1690
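/* Strip any STRICT_LOW_PART, SUBREG, or bit-field wrappers to find the
   register or memory location actually being written; the extra operands
   of a ZERO_EXTRACT or SIGN_EXTRACT are reads and are analyzed as such.  */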
1691 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
1692 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1693 {
1694 if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1695 {
1696 /* The second and third arguments are values read by this insn. */
1697 sched_analyze_2 (XEXP (dest, 1), insn);
1698 sched_analyze_2 (XEXP (dest, 2), insn);
1699 }
1700 dest = SUBREG_REG (dest);
1701 }
1702
1703 if (GET_CODE (dest) == REG)
1704 {
1705 register int i;
1706
1707 regno = REGNO (dest);
1708
1709 /* A hard reg in a wide mode may really be multiple registers.
1710 If so, mark all of them just like the first. */
1711 if (regno < FIRST_PSEUDO_REGISTER)
1712 {
1713 i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
1714 while (--i >= 0)
1715 {
1716 rtx u;
1717
1718 for (u = reg_last_uses[regno+i]; u; u = XEXP (u, 1))
1719 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1720 reg_last_uses[regno + i] = 0;
1721 if (reg_last_sets[regno + i])
1722 add_dependence (insn, reg_last_sets[regno + i],
1723 REG_DEP_OUTPUT);
1724 reg_pending_sets[(regno + i) / REGSET_ELT_BITS]
1725 |= (REGSET_ELT_TYPE) 1 << ((regno + i) % REGSET_ELT_BITS);
1726 if ((call_used_regs[regno + i] || global_regs[regno + i])
1727 && last_function_call)
1728 /* Function calls clobber all call_used regs. */
1729 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1730 }
1731 }
1732 else
1733 {
1734 rtx u;
1735
1736 for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
1737 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1738 reg_last_uses[regno] = 0;
1739 if (reg_last_sets[regno])
1740 add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
1741 reg_pending_sets[regno / REGSET_ELT_BITS]
1742 |= (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
1743
1744 /* Pseudos that are REG_EQUIV to something may be replaced
1745 by that during reloading. We need only add dependencies for
1746 the address in the REG_EQUIV note. */
1747 if (! reload_completed
1748 && reg_known_equiv_p[regno]
1749 && GET_CODE (reg_known_value[regno]) == MEM)
1750 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1751
1752 /* Don't let it cross a call after scheduling if it doesn't
1753 already cross one. */
1754 if (reg_n_calls_crossed[regno] == 0 && last_function_call)
1755 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1756 }
1757 }
1758 else if (GET_CODE (dest) == MEM)
1759 {
1760 /* Writing memory. */
1761
1762 if (pending_lists_length > 32)
1763 {
1764 /* Flush all pending reads and writes to prevent the pending lists
1765 from getting any larger. Insn scheduling runs too slowly when
1766 these lists get long. The number 32 was chosen because it
1767 seems like a reasonable number. When compiling GCC with itself,
1768 this flush occurs 8 times for sparc, and 10 times for m88k using
1769 the number 32. */
1770 flush_pending_lists (insn, 0);
1771 }
1772 else
1773 {
1774 rtx pending, pending_mem;
1775
1776 pending = pending_read_insns;
1777 pending_mem = pending_read_mems;
1778 while (pending)
1779 {
1780 /* If a dependency already exists, don't create a new one. */
1781 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1782 if (anti_dependence (XEXP (pending_mem, 0), dest))
1783 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1784
1785 pending = XEXP (pending, 1);
1786 pending_mem = XEXP (pending_mem, 1);
1787 }
1788
1789 pending = pending_write_insns;
1790 pending_mem = pending_write_mems;
1791 while (pending)
1792 {
1793 /* If a dependency already exists, don't create a new one. */
1794 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1795 if (output_dependence (XEXP (pending_mem, 0), dest))
1796 add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);
1797
1798 pending = XEXP (pending, 1);
1799 pending_mem = XEXP (pending_mem, 1);
1800 }
1801
1802 if (last_pending_memory_flush)
1803 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1804
1805 add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
1806 insn, dest);
1807 }
1808 sched_analyze_2 (XEXP (dest, 0), insn);
1809 }
1810
1811 /* Analyze reads. */
1812 if (GET_CODE (x) == SET)
1813 sched_analyze_2 (SET_SRC (x), insn);
1814 }
1815
1816 /* Analyze the uses of memory and registers in rtx X in INSN. */
1817
1818 static void
1819 sched_analyze_2 (x, insn)
1820 rtx x;
1821 rtx insn;
1822 {
1823 register int i;
1824 register int j;
1825 register enum rtx_code code;
1826 register char *fmt;
1827
1828 if (x == 0)
1829 return;
1830
1831 code = GET_CODE (x);
1832
1833 switch (code)
1834 {
1835 case CONST_INT:
1836 case CONST_DOUBLE:
1837 case SYMBOL_REF:
1838 case CONST:
1839 case LABEL_REF:
1840 /* Ignore constants. Note that we must handle CONST_DOUBLE here
1841 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
1842 this does not mean that this insn is using cc0. */
1843 return;
1844
1845 #ifdef HAVE_cc0
1846 case CC0:
1847 {
1848 rtx link, prev;
1849
1850 /* User of CC0 depends on immediately preceding insn. */
1851 SCHED_GROUP_P (insn) = 1;
1852
1853 /* There may be a note before this insn now, but all notes will
1854 be removed before we actually try to schedule the insns, so
1855 it won't cause a problem later. We must avoid it here though. */
1856 prev = prev_nonnote_insn (insn);
1857
1858 /* Make a copy of all dependencies on the immediately previous insn,
1859 and add to this insn. This is so that all the dependencies will
1860 apply to the group. Remove an explicit dependence on this insn
1861 as SCHED_GROUP_P now represents it. */
1862
1863 if (find_insn_list (prev, LOG_LINKS (insn)))
1864 remove_dependence (insn, prev);
1865
1866 for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
1867 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1868
1869 return;
1870 }
1871 #endif
1872
1873 case REG:
1874 {
1875 int regno = REGNO (x);
1876 if (regno < FIRST_PSEUDO_REGISTER)
1877 {
1878 int i;
1879
1880 i = HARD_REGNO_NREGS (regno, GET_MODE (x));
1881 while (--i >= 0)
1882 {
1883 reg_last_uses[regno + i]
1884 = gen_rtx (INSN_LIST, VOIDmode,
1885 insn, reg_last_uses[regno + i]);
1886 if (reg_last_sets[regno + i])
1887 add_dependence (insn, reg_last_sets[regno + i], 0);
1888 if ((call_used_regs[regno + i] || global_regs[regno + i])
1889 && last_function_call)
1890 /* Function calls clobber all call_used regs. */
1891 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1892 }
1893 }
1894 else
1895 {
1896 reg_last_uses[regno]
1897 = gen_rtx (INSN_LIST, VOIDmode, insn, reg_last_uses[regno]);
1898 if (reg_last_sets[regno])
1899 add_dependence (insn, reg_last_sets[regno], 0);
1900
1901 /* Pseudos that are REG_EQUIV to something may be replaced
1902 by that during reloading. We need only add dependencies for
1903 the address in the REG_EQUIV note. */
1904 if (! reload_completed
1905 && reg_known_equiv_p[regno]
1906 && GET_CODE (reg_known_value[regno]) == MEM)
1907 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1908
1909 /* If the register does not already cross any calls, then add this
1910 insn to the sched_before_next_call list so that it will still
1911 not cross calls after scheduling. */
1912 if (reg_n_calls_crossed[regno] == 0)
1913 add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
1914 }
1915 return;
1916 }
1917
1918 case MEM:
1919 {
1920 /* Reading memory. */
1921
1922 rtx pending, pending_mem;
1923
1924 pending = pending_read_insns;
1925 pending_mem = pending_read_mems;
1926 while (pending)
1927 {
1928 /* If a dependency already exists, don't create a new one. */
1929 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1930 if (read_dependence (XEXP (pending_mem, 0), x))
1931 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1932
1933 pending = XEXP (pending, 1);
1934 pending_mem = XEXP (pending_mem, 1);
1935 }
1936
1937 pending = pending_write_insns;
1938 pending_mem = pending_write_mems;
1939 while (pending)
1940 {
1941 /* If a dependency already exists, don't create a new one. */
1942 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1943 if (true_dependence (XEXP (pending_mem, 0), x))
1944 add_dependence (insn, XEXP (pending, 0), 0);
1945
1946 pending = XEXP (pending, 1);
1947 pending_mem = XEXP (pending_mem, 1);
1948 }
1949 if (last_pending_memory_flush)
1950 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1951
1952 /* Always add these dependencies to pending_reads, since
1953 this insn may be followed by a write. */
1954 add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
1955 insn, x);
1956
1957 /* Take advantage of tail recursion here. */
1958 sched_analyze_2 (XEXP (x, 0), insn);
1959 return;
1960 }
1961
1962 case ASM_OPERANDS:
1963 case ASM_INPUT:
1964 case UNSPEC_VOLATILE:
1965 case TRAP_IF:
1966 {
1967 rtx u;
1968
1969 /* Traditional and volatile asm instructions must be considered to use
1970 and clobber all hard registers, all pseudo-registers and all of
1971 memory. So must TRAP_IF and UNSPEC_VOLATILE operations.
1972
1973 Consider for instance a volatile asm that changes the fpu rounding
1974 mode. An insn should not be moved across this even if it only uses
1975 pseudo-regs because it might give an incorrectly rounded result. */
1976 if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
1977 {
1978 int max_reg = max_reg_num ();
1979 for (i = 0; i < max_reg; i++)
1980 {
1981 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1982 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1983 reg_last_uses[i] = 0;
1984 if (reg_last_sets[i])
1985 add_dependence (insn, reg_last_sets[i], 0);
1986 }
1987 reg_pending_sets_all = 1;
1988
1989 flush_pending_lists (insn, 0);
1990 }
1991
1992 /* For all ASM_OPERANDS, we must traverse the vector of input operands.
1993 We cannot just fall through here, since then we would be confused
1994 by the ASM_INPUT rtx inside ASM_OPERANDS, which does not indicate
1995 a traditional asm as it would in its normal usage. */
1996
1997 if (code == ASM_OPERANDS)
1998 {
1999 for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
2000 sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
2001 return;
2002 }
2003 break;
2004 }
2005
2006 case PRE_DEC:
2007 case POST_DEC:
2008 case PRE_INC:
2009 case POST_INC:
2010 /* These both read and modify the result. We must handle them as writes
2011 to get proper dependencies for following instructions. We must handle
2012 them as reads to get proper dependencies from this to previous
2013 instructions. Thus we need to pass them to both sched_analyze_1
2014 and sched_analyze_2. We must call sched_analyze_2 first in order
2015 to get the proper antecedent for the read. */
2016 sched_analyze_2 (XEXP (x, 0), insn);
2017 sched_analyze_1 (x, insn);
2018 return;
2019 }
2020
2021 /* Other cases: walk the insn. */
2022 fmt = GET_RTX_FORMAT (code);
2023 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2024 {
2025 if (fmt[i] == 'e')
2026 sched_analyze_2 (XEXP (x, i), insn);
2027 else if (fmt[i] == 'E')
2028 for (j = 0; j < XVECLEN (x, i); j++)
2029 sched_analyze_2 (XVECEXP (x, i, j), insn);
2030 }
2031 }
2032
2033 /* Analyze an INSN with pattern X to find all dependencies. */
2034
2035 static void
2036 sched_analyze_insn (x, insn, loop_notes)
2037 rtx x, insn;
2038 rtx loop_notes;
2039 {
2040 register RTX_CODE code = GET_CODE (x);
2041 rtx link;
2042 int maxreg = max_reg_num ();
2043 int i;
2044
2045 if (code == SET || code == CLOBBER)
2046 sched_analyze_1 (x, insn);
2047 else if (code == PARALLEL)
2048 {
2049 register int i;
2050 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2051 {
2052 code = GET_CODE (XVECEXP (x, 0, i));
2053 if (code == SET || code == CLOBBER)
2054 sched_analyze_1 (XVECEXP (x, 0, i), insn);
2055 else
2056 sched_analyze_2 (XVECEXP (x, 0, i), insn);
2057 }
2058 }
2059 else
2060 sched_analyze_2 (x, insn);
2061
2062 /* Mark registers CLOBBERED or used by the called function. */
2063 if (GET_CODE (insn) == CALL_INSN)
2064 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2065 {
2066 if (GET_CODE (XEXP (link, 0)) == CLOBBER)
2067 sched_analyze_1 (XEXP (link, 0), insn);
2068 else
2069 sched_analyze_2 (XEXP (link, 0), insn);
2070 }
2071
2072 /* If there is a LOOP_{BEG,END} note in the middle of a basic block, then
2073 we must be sure that no instructions are scheduled across it.
2074 Otherwise, the reg_n_refs info (which depends on loop_depth) would
2075 become incorrect. */
2076
2077 if (loop_notes)
2078 {
2079 int max_reg = max_reg_num ();
2080 rtx link;
2081
2082 for (i = 0; i < max_reg; i++)
2083 {
2084 rtx u;
2085 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2086 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2087 reg_last_uses[i] = 0;
2088 if (reg_last_sets[i])
2089 add_dependence (insn, reg_last_sets[i], 0);
2090 }
2091 reg_pending_sets_all = 1;
2092
2093 flush_pending_lists (insn, 0);
2094
2095 link = loop_notes;
2096 while (XEXP (link, 1))
2097 link = XEXP (link, 1);
2098 XEXP (link, 1) = REG_NOTES (insn);
2099 REG_NOTES (insn) = loop_notes;
2100 }
2101
2102 /* After reload, it is possible for an instruction to have a REG_DEAD note
2103 for a register that actually dies a few instructions earlier. For
2104 example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
2105 In this case, we must consider the insn to use the register mentioned
2106 in the REG_DEAD note. Otherwise, we may accidentally move this insn
2107 after another insn that sets the register, thus getting obviously invalid
2108 rtl. This confuses reorg which believes that REG_DEAD notes are still
2109 meaningful.
2110
2111 ??? We would get better code if we fixed reload to put the REG_DEAD
2112 notes in the right places, but that may not be worth the effort. */
2113
2114 if (reload_completed)
2115 {
2116 rtx note;
2117
2118 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2119 if (REG_NOTE_KIND (note) == REG_DEAD)
2120 sched_analyze_2 (XEXP (note, 0), insn);
2121 }
2122
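/* All dependencies for this insn have now been recorded, so commit the
   register sets accumulated in reg_pending_sets (and reg_pending_sets_all)
   into reg_last_sets.  Deferring this keeps the insn's own SETs from being
   seen as previous sets while its uses are being analyzed.  */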
2123 for (i = 0; i < regset_size; i++)
2124 {
2125 REGSET_ELT_TYPE sets = reg_pending_sets[i];
2126 if (sets)
2127 {
2128 register int bit;
2129 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
2130 if (sets & ((REGSET_ELT_TYPE) 1 << bit))
2131 reg_last_sets[i * REGSET_ELT_BITS + bit] = insn;
2132 reg_pending_sets[i] = 0;
2133 }
2134 }
2135 if (reg_pending_sets_all)
2136 {
2137 for (i = 0; i < maxreg; i++)
2138 reg_last_sets[i] = insn;
2139 reg_pending_sets_all = 0;
2140 }
2141
2142 /* Handle function calls and function returns created by the epilogue
2143 threading code. */
2144 if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
2145 {
2146 rtx dep_insn;
2147 rtx prev_dep_insn;
2148
2149 /* When scheduling instructions, we make sure calls don't lose their
2150 accompanying USE insns by making each depend on the insn before it, in order.
2151
2152 Also, we must do the same thing for returns created by the epilogue
2153 threading code. Note this code works only in this special case,
2154 because other passes make no guarantee that they will never emit
2155 an instruction between a USE and a RETURN. There is such a guarantee
2156 for USE instructions immediately before a call. */
2157
2158 prev_dep_insn = insn;
2159 dep_insn = PREV_INSN (insn);
2160 while (GET_CODE (dep_insn) == INSN
2161 && GET_CODE (PATTERN (dep_insn)) == USE
2162 && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
2163 {
2164 SCHED_GROUP_P (prev_dep_insn) = 1;
2165
2166 /* Make a copy of all dependencies on dep_insn, and add to insn.
2167 This is so that all of the dependencies will apply to the
2168 group. */
2169
2170 for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
2171 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
2172
2173 prev_dep_insn = dep_insn;
2174 dep_insn = PREV_INSN (dep_insn);
2175 }
2176 }
2177 }
2178
2179 /* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
2180 for every dependency. */
2181
2182 static int
2183 sched_analyze (head, tail)
2184 rtx head, tail;
2185 {
2186 register rtx insn;
2187 register int n_insns = 0;
2188 register rtx u;
2189 register int luid = 0;
2190 rtx loop_notes = 0;
2191
2192 for (insn = head; ; insn = NEXT_INSN (insn))
2193 {
2194 INSN_LUID (insn) = luid++;
2195
2196 if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
2197 {
2198 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2199 loop_notes = 0;
2200 n_insns += 1;
2201 }
2202 else if (GET_CODE (insn) == CALL_INSN)
2203 {
2204 rtx x;
2205 register int i;
2206
2207 /* Any instruction using a hard register which may get clobbered
2208 by a call needs to be marked as dependent on this call.
2209 This prevents a use of a hard return reg from being moved
2210 past a void call (i.e. it does not explicitly set the hard
2211 return reg). */
2212
2213 /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
2214 all registers, not just hard registers, may be clobbered by this
2215 call. */
2216
2217 /* Insn, being a CALL_INSN, magically depends on
2218 `last_function_call' already. */
2219
2220 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
2221 && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
2222 {
2223 int max_reg = max_reg_num ();
2224 for (i = 0; i < max_reg; i++)
2225 {
2226 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2227 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2228 reg_last_uses[i] = 0;
2229 if (reg_last_sets[i])
2230 add_dependence (insn, reg_last_sets[i], 0);
2231 }
2232 reg_pending_sets_all = 1;
2233
2234 /* Add a fake REG_NOTE which we will later convert
2235 back into a NOTE_INSN_SETJMP note. */
2236 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_DEAD,
2237 GEN_INT (NOTE_INSN_SETJMP),
2238 REG_NOTES (insn));
2239 }
2240 else
2241 {
2242 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
2243 if (call_used_regs[i] || global_regs[i])
2244 {
2245 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
2246 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
2247 reg_last_uses[i] = 0;
2248 if (reg_last_sets[i])
2249 add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
2250 reg_pending_sets[i / REGSET_ELT_BITS]
2251 |= (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
2252 }
2253 }
2254
2255 /* For each insn which shouldn't cross a call, add a dependence
2256 between that insn and this call insn. */
2257 x = LOG_LINKS (sched_before_next_call);
2258 while (x)
2259 {
2260 add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
2261 x = XEXP (x, 1);
2262 }
2263 LOG_LINKS (sched_before_next_call) = 0;
2264
2265 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
2266 loop_notes = 0;
2267
2268 /* In the absence of interprocedural alias analysis, we must flush
2269 all pending reads and writes, and start new dependencies
2270 from here. But only flush writes for constant calls (which may
2271 be passed a pointer to something we haven't written yet). */
2272 flush_pending_lists (insn, CONST_CALL_P (insn));
2273
2274 /* Depend this function call (actually, the user of this
2275 function call) on all hard register clobberage. */
2276 last_function_call = insn;
2277 n_insns += 1;
2278 }
2279 else if (GET_CODE (insn) == NOTE
2280 && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
2281 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
2282 || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
2283 && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
2284 {
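/* Record this note's line number on the loop_notes list; sched_analyze_insn
   will attach the list to the next real insn as fake register notes and
   force dependences there, and reemit_notes later turns them back into
   real NOTEs.  */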
2285 loop_notes = gen_rtx (EXPR_LIST, REG_DEAD,
2286 GEN_INT (NOTE_LINE_NUMBER (insn)), loop_notes);
2287 CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
2288 }
2289
2290 if (insn == tail)
2291 return n_insns;
2292 }
2293
2294 abort ();
2295 }
2296 \f
2297 /* Called when we see a set of a register. If death is true, then we are
2298 scanning backwards. Mark that register as unborn. If nobody says
2299 otherwise, that is how things will remain. If death is false, then we
2300 are scanning forwards. Mark that register as being born. */
2301
2302 static void
2303 sched_note_set (b, x, death)
2304 int b;
2305 rtx x;
2306 int death;
2307 {
2308 register int regno;
2309 register rtx reg = SET_DEST (x);
2310 int subreg_p = 0;
2311
2312 if (reg == 0)
2313 return;
2314
2315 while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
2316 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
2317 {
2318 /* Must treat modification of just one hardware register of a multi-reg
2319 value or just a byte field of a register exactly the same way that
2320 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
2321 does not kill the entire register. */
2322 if (GET_CODE (reg) != SUBREG
2323 || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
2324 subreg_p = 1;
2325
2326 reg = SUBREG_REG (reg);
2327 }
2328
2329 if (GET_CODE (reg) != REG)
2330 return;
2331
2332 /* Global registers are always live, so the code below does not apply
2333 to them. */
2334
2335 regno = REGNO (reg);
2336 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2337 {
2338 register int offset = regno / REGSET_ELT_BITS;
2339 register REGSET_ELT_TYPE bit
2340 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
2341
2342 if (death)
2343 {
2344 /* If we only set part of the register, then this set does not
2345 kill it. */
2346 if (subreg_p)
2347 return;
2348
2349 /* Try killing this register. */
2350 if (regno < FIRST_PSEUDO_REGISTER)
2351 {
2352 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2353 while (--j >= 0)
2354 {
2355 offset = (regno + j) / REGSET_ELT_BITS;
2356 bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2357
2358 bb_live_regs[offset] &= ~bit;
2359 bb_dead_regs[offset] |= bit;
2360 }
2361 }
2362 else
2363 {
2364 bb_live_regs[offset] &= ~bit;
2365 bb_dead_regs[offset] |= bit;
2366 }
2367 }
2368 else
2369 {
2370 /* Make the register live again. */
2371 if (regno < FIRST_PSEUDO_REGISTER)
2372 {
2373 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
2374 while (--j >= 0)
2375 {
2376 offset = (regno + j) / REGSET_ELT_BITS;
2377 bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2378
2379 bb_live_regs[offset] |= bit;
2380 bb_dead_regs[offset] &= ~bit;
2381 }
2382 }
2383 else
2384 {
2385 bb_live_regs[offset] |= bit;
2386 bb_dead_regs[offset] &= ~bit;
2387 }
2388 }
2389 }
2390 }
2391 \f
2392 /* Macros and functions for keeping the priority queue sorted, and
2393 dealing with queueing and dequeueing of instructions. */
2394
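/* SCHED_SORT re-sorts the READY vector after NEW_READY - OLD_READY insns
   have been appended: a single addition is sifted into place with swap_sort,
   while several additions force a full qsort using rank_for_schedule.  */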
2395 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
2396 do { if ((NEW_READY) - (OLD_READY) == 1) \
2397 swap_sort (READY, NEW_READY); \
2398 else if ((NEW_READY) - (OLD_READY) > 1) \
2399 qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
2400 while (0)
2401
2402 /* Returns a positive value if y is preferred; returns a negative value if
2403 x is preferred. Should never return 0, since that will make the sort
2404 unstable. */
2405
2406 static int
2407 rank_for_schedule (x, y)
2408 rtx *x, *y;
2409 {
2410 rtx tmp = *y;
2411 rtx tmp2 = *x;
2412 rtx link;
2413 int tmp_class, tmp2_class;
2414 int value;
2415
2416 /* Choose the instruction with the highest priority, if different. */
2417 if ((value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2)) != 0)
2418 return value;
2419
2420 if (last_scheduled_insn)
2421 {
2422 /* Classify the instructions into three classes:
2423 1) Data dependent on last scheduled insn.
2424 2) Anti/Output dependent on last scheduled insn.
2425 3) Independent of last scheduled insn, or has latency of one.
2426 Choose the insn from the highest numbered class if different. */
2427 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
2428 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
2429 tmp_class = 3;
2430 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2431 tmp_class = 1;
2432 else
2433 tmp_class = 2;
2434
2435 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
2436 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
2437 tmp2_class = 3;
2438 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
2439 tmp2_class = 1;
2440 else
2441 tmp2_class = 2;
2442
2443 if ((value = tmp_class - tmp2_class) != 0)
2444 return value;
2445 }
2446
2447 /* If insns are equally good, sort by INSN_LUID (original insn order),
2448 so that we make the sort stable. This minimizes instruction movement,
2449 thus minimizing sched's effect on debugging and cross-jumping. */
2450 return INSN_LUID (tmp) - INSN_LUID (tmp2);
2451 }
2452
2453 /* Resort the array A of length N, in which only the last element may be out of order. */
2454
2455 __inline static void
2456 swap_sort (a, n)
2457 rtx *a;
2458 int n;
2459 {
2460 rtx insn = a[n-1];
2461 int i = n-2;
2462
2463 while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
2464 {
2465 a[i+1] = a[i];
2466 i -= 1;
2467 }
2468 a[i+1] = insn;
2469 }
2470
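/* The highest priority found among the insns ready at the point an insn is
   scheduled; adjust_priority uses it to boost insns that make a register
   live (see birthing_insn_p).  */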
2471 static int max_priority;
2472
2473 /* Add INSN to the insn queue so that it fires at least N_CYCLES
2474 before the currently executing insn. */
2475
2476 __inline static void
2477 queue_insn (insn, n_cycles)
2478 rtx insn;
2479 int n_cycles;
2480 {
2481 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
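/* Insns in a queue bucket are chained through their NEXT_INSN fields;
   the real insn chain is rebuilt when the scheduled insns are emitted.  */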
2482 NEXT_INSN (insn) = insn_queue[next_q];
2483 insn_queue[next_q] = insn;
2484 q_size += 1;
2485 }
2486
2487 /* Return nonzero if PAT is the pattern of an insn which makes a
2488 register live. */
2489
2490 __inline static int
2491 birthing_insn_p (pat)
2492 rtx pat;
2493 {
2494 int j;
2495
2496 if (reload_completed == 1)
2497 return 0;
2498
2499 if (GET_CODE (pat) == SET
2500 && GET_CODE (SET_DEST (pat)) == REG)
2501 {
2502 rtx dest = SET_DEST (pat);
2503 int i = REGNO (dest);
2504 int offset = i / REGSET_ELT_BITS;
2505 REGSET_ELT_TYPE bit = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
2506
2507 /* It would be more accurate to use refers_to_regno_p or
2508 reg_mentioned_p to determine when the dest is not live before this
2509 insn. */
2510
2511 if (bb_live_regs[offset] & bit)
2512 return (reg_n_sets[i] == 1);
2513
2514 return 0;
2515 }
2516 if (GET_CODE (pat) == PARALLEL)
2517 {
2518 for (j = 0; j < XVECLEN (pat, 0); j++)
2519 if (birthing_insn_p (XVECEXP (pat, 0, j)))
2520 return 1;
2521 }
2522 return 0;
2523 }
2524
2525 /* PREV is an insn that is ready to execute. Adjust its priority if that
2526 will help shorten register lifetimes. */
2527
2528 __inline static void
2529 adjust_priority (prev)
2530 rtx prev;
2531 {
2532 /* Trying to shorten register lives after reload has completed
2533 is useless and wrong. It gives inaccurate schedules. */
2534 if (reload_completed == 0)
2535 {
2536 rtx note;
2537 int n_deaths = 0;
2538
2539 /* ??? This code has no effect, because REG_DEAD notes are removed
2540 before we ever get here. */
2541 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
2542 if (REG_NOTE_KIND (note) == REG_DEAD)
2543 n_deaths += 1;
2544
2545 /* Defer scheduling insns which kill registers, since that
2546 shortens register lives. Prefer scheduling insns which
2547 make registers live for the same reason. */
2548 switch (n_deaths)
2549 {
2550 default:
2551 INSN_PRIORITY (prev) >>= 3;
2552 break;
2553 case 3:
2554 INSN_PRIORITY (prev) >>= 2;
2555 break;
2556 case 2:
2557 case 1:
2558 INSN_PRIORITY (prev) >>= 1;
2559 break;
2560 case 0:
2561 if (birthing_insn_p (PATTERN (prev)))
2562 {
2563 int max = max_priority;
2564
2565 if (max > INSN_PRIORITY (prev))
2566 INSN_PRIORITY (prev) = max;
2567 }
2568 break;
2569 }
2570 #ifdef ADJUST_PRIORITY
2571 ADJUST_PRIORITY (prev);
2572 #endif
2573 }
2574 }
2575
2576 /* INSN is the "currently executing insn". Launch each insn which was
2577 waiting on INSN (in the backwards dataflow sense). READY is a
2578 vector of insns which are ready to fire. N_READY is the number of
2579 elements in READY. CLOCK is the current virtual cycle. */
2580
2581 static int
2582 schedule_insn (insn, ready, n_ready, clock)
2583 rtx insn;
2584 rtx *ready;
2585 int n_ready;
2586 int clock;
2587 {
2588 rtx link;
2589 int new_ready = n_ready;
2590
2591 if (MAX_BLOCKAGE > 1)
2592 schedule_unit (insn_unit (insn), insn, clock);
2593
2594 if (LOG_LINKS (insn) == 0)
2595 return n_ready;
2596
2597 /* This is used by the function adjust_priority above. */
2598 if (n_ready > 0)
2599 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2600 else
2601 max_priority = INSN_PRIORITY (insn);
2602
2603 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2604 {
2605 rtx prev = XEXP (link, 0);
2606 int cost = insn_cost (prev, link, insn);
2607
2608 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2609 {
2610 /* We satisfied one requirement to fire PREV. Record the earliest
2611 time when PREV can fire. No need to do this if the cost is 1,
2612 because PREV can fire no sooner than the next cycle. */
2613 if (cost > 1)
2614 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2615 }
2616 else
2617 {
2618 /* We satisfied the last requirement to fire PREV. Ensure that all
2619 timing requirements are satisfied. */
2620 if (INSN_TICK (prev) - clock > cost)
2621 cost = INSN_TICK (prev) - clock;
2622
2623 /* Adjust the priority of PREV and either put it on the ready
2624 list or queue it. */
2625 adjust_priority (prev);
2626 if (cost <= 1)
2627 ready[new_ready++] = prev;
2628 else
2629 queue_insn (prev, cost);
2630 }
2631 }
2632
2633 return new_ready;
2634 }
2635
2636 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2637 those that are blocked due to function unit hazards and rearrange
2638 the remaining ones to minimize subsequent function unit hazards. */
2639
2640 static int
2641 schedule_select (ready, n_ready, clock, file)
2642 rtx *ready;
2643 int n_ready, clock;
2644 FILE *file;
2645 {
2646 int pri = INSN_PRIORITY (ready[0]);
2647 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2648 rtx insn;
2649
2650 /* Work down the ready list in groups of instructions with the same
2651 priority value. Queue insns in the group that are blocked and
2652 select among those that remain for the one with the largest
2653 potential hazard. */
2654 for (i = 0; i < n_ready; i = j)
2655 {
2656 int opri = pri;
2657 for (j = i + 1; j < n_ready; j++)
2658 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2659 break;
2660
2661 /* Queue insns in the group that are blocked. */
2662 for (k = i, q = 0; k < j; k++)
2663 {
2664 insn = ready[k];
2665 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2666 {
2667 q++;
2668 ready[k] = 0;
2669 queue_insn (insn, cost);
2670 if (file)
2671 fprintf (file, "\n;; blocking insn %d for %d cycles",
2672 INSN_UID (insn), cost);
2673 }
2674 }
2675 new_ready -= q;
2676
2677 /* Check the next group if all insns were queued. */
2678 if (j - i - q == 0)
2679 continue;
2680
2681 /* If more than one remains, select the first one with the largest
2682 potential hazard. */
2683 else if (j - i - q > 1)
2684 {
2685 best_cost = -1;
2686 for (k = i; k < j; k++)
2687 {
2688 if ((insn = ready[k]) == 0)
2689 continue;
2690 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2691 > best_cost)
2692 {
2693 best_cost = cost;
2694 best_insn = k;
2695 }
2696 }
2697 }
2698 /* We have found a suitable insn to schedule. */
2699 break;
2700 }
2701
2702 /* Move the best insn to be front of the ready list. */
2703 if (best_insn != 0)
2704 {
2705 if (file)
2706 {
2707 fprintf (file, ", now");
2708 for (i = 0; i < n_ready; i++)
2709 if (ready[i])
2710 fprintf (file, " %d", INSN_UID (ready[i]));
2711 fprintf (file, "\n;; insn %d has a greater potential hazard",
2712 INSN_UID (ready[best_insn]));
2713 }
2714 for (i = best_insn; i > 0; i--)
2715 {
2716 insn = ready[i-1];
2717 ready[i-1] = ready[i];
2718 ready[i] = insn;
2719 }
2720 }
2721
2722 /* Compact the ready list. */
2723 if (new_ready < n_ready)
2724 for (i = j = 0; i < n_ready; i++)
2725 if (ready[i])
2726 ready[j++] = ready[i];
2727
2728 return new_ready;
2729 }
2730
2731 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2732 dead_notes list. */
2733
2734 static void
2735 create_reg_dead_note (reg, insn)
2736 rtx reg, insn;
2737 {
2738 rtx link;
2739
2740 /* The number of registers killed after scheduling must be the same as the
2741 number of registers killed before scheduling. The number of REG_DEAD
2742 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2743 might become one DImode hard register REG_DEAD note, but the number of
2744 registers killed will be conserved.
2745
2746 We carefully remove REG_DEAD notes from the dead_notes list, so that
2747 there will be none left at the end. If we run out early, then there
2748 is a bug somewhere in flow, combine and/or sched. */
2749
2750 if (dead_notes == 0)
2751 {
2752 #if 1
2753 abort ();
2754 #else
2755 link = rtx_alloc (EXPR_LIST);
2756 PUT_REG_NOTE_KIND (link, REG_DEAD);
2757 #endif
2758 }
2759 else
2760 {
2761 /* Number of regs killed by REG. */
2762 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2763 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2764 /* Number of regs killed by REG_DEAD notes taken off the list. */
2765 int reg_note_regs;
2766
2767 link = dead_notes;
2768 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2769 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2770 GET_MODE (XEXP (link, 0))));
2771 while (reg_note_regs < regs_killed)
2772 {
2773 link = XEXP (link, 1);
2774 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2775 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2776 GET_MODE (XEXP (link, 0))));
2777 }
2778 dead_notes = XEXP (link, 1);
2779
2780 /* If we took too many register kills off, put the extra ones back. */
2781 while (reg_note_regs > regs_killed)
2782 {
2783 rtx temp_reg, temp_link;
2784
2785 temp_reg = gen_rtx (REG, word_mode, 0);
2786 temp_link = rtx_alloc (EXPR_LIST);
2787 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2788 XEXP (temp_link, 0) = temp_reg;
2789 XEXP (temp_link, 1) = dead_notes;
2790 dead_notes = temp_link;
2791 reg_note_regs--;
2792 }
2793 }
2794
2795 XEXP (link, 0) = reg;
2796 XEXP (link, 1) = REG_NOTES (insn);
2797 REG_NOTES (insn) = link;
2798 }
2799
2800 /* Subroutine of attach_deaths_insn--handles the recursive search
2801 through INSN. If SET_P is true, then x is being modified by the insn. */
2802
2803 static void
2804 attach_deaths (x, insn, set_p)
2805 rtx x;
2806 rtx insn;
2807 int set_p;
2808 {
2809 register int i;
2810 register int j;
2811 register enum rtx_code code;
2812 register char *fmt;
2813
2814 if (x == 0)
2815 return;
2816
2817 code = GET_CODE (x);
2818
2819 switch (code)
2820 {
2821 case CONST_INT:
2822 case CONST_DOUBLE:
2823 case LABEL_REF:
2824 case SYMBOL_REF:
2825 case CONST:
2826 case CODE_LABEL:
2827 case PC:
2828 case CC0:
2829 /* Get rid of the easy cases first. */
2830 return;
2831
2832 case REG:
2833 {
2834 /* If the register dies in this insn, queue that note, and mark
2835 this register as needing to die. */
2836 /* This code is very similar to mark_used_1 (if set_p is false)
2837 and mark_set_1 (if set_p is true) in flow.c. */
2838
2839 register int regno = REGNO (x);
2840 register int offset = regno / REGSET_ELT_BITS;
2841 register REGSET_ELT_TYPE bit
2842 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
2843 REGSET_ELT_TYPE all_needed = (old_live_regs[offset] & bit);
2844 REGSET_ELT_TYPE some_needed = (old_live_regs[offset] & bit);
2845
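/* A reference that only sets the register is not a use, so no REG_DEAD
   note is attached for it here.  */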
2846 if (set_p)
2847 return;
2848
2849 if (regno < FIRST_PSEUDO_REGISTER)
2850 {
2851 int n;
2852
2853 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2854 while (--n > 0)
2855 {
2856 some_needed |= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
2857 & ((REGSET_ELT_TYPE) 1
2858 << ((regno + n) % REGSET_ELT_BITS)));
2859 all_needed &= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
2860 & ((REGSET_ELT_TYPE) 1
2861 << ((regno + n) % REGSET_ELT_BITS)));
2862 }
2863 }
2864
2865 /* If it wasn't live before we started, then add a REG_DEAD note.
2866 We must check the previous lifetime info not the current info,
2867 because we may have to execute this code several times, e.g.
2868 once for a clobber (which doesn't add a note) and later
2869 for a use (which does add a note).
2870
2871 Always make the register live. We must do this even if it was
2872 live before, because this may be an insn which sets and uses
2873 the same register, in which case the register has already been
2874 killed, so we must make it live again.
2875
2876 Global registers are always live, and should never have a REG_DEAD
2877 note added for them, so none of the code below applies to them. */
2878
2879 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2880 {
2881 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2882 STACK_POINTER_REGNUM, since these are always considered to be
2883 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2884 if (regno != FRAME_POINTER_REGNUM
2885 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2886 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2887 #endif
2888 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2889 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2890 #endif
2891 && regno != STACK_POINTER_REGNUM)
2892 {
2893 /* ??? It is perhaps a dead_or_set_p bug that it does
2894 not check for REG_UNUSED notes itself. This is necessary
2895 for the case where the SET_DEST is a subreg of regno, as
2896 dead_or_set_p handles subregs specially. */
2897 if (! all_needed && ! dead_or_set_p (insn, x)
2898 && ! find_reg_note (insn, REG_UNUSED, x))
2899 {
2900 /* Check for the case where the register dying partially
2901 overlaps the register set by this insn. */
2902 if (regno < FIRST_PSEUDO_REGISTER
2903 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2904 {
2905 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2906 while (--n >= 0)
2907 some_needed |= dead_or_set_regno_p (insn, regno + n);
2908 }
2909
2910 /* If none of the words in X is needed, make a REG_DEAD
2911 note. Otherwise, we must make partial REG_DEAD
2912 notes. */
2913 if (! some_needed)
2914 create_reg_dead_note (x, insn);
2915 else
2916 {
2917 int i;
2918
2919 /* Don't make a REG_DEAD note for a part of a
2920 register that is set in the insn. */
2921 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2922 i >= 0; i--)
2923 if ((old_live_regs[(regno + i) / REGSET_ELT_BITS]
2924 & ((REGSET_ELT_TYPE) 1
2925 << ((regno + i) % REGSET_ELT_BITS))) == 0
2926 && ! dead_or_set_regno_p (insn, regno + i))
2927 create_reg_dead_note (gen_rtx (REG,
2928 reg_raw_mode[regno + i],
2929 regno + i),
2930 insn);
2931 }
2932 }
2933 }
2934
2935 if (regno < FIRST_PSEUDO_REGISTER)
2936 {
2937 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2938 while (--j >= 0)
2939 {
2940 offset = (regno + j) / REGSET_ELT_BITS;
2941 bit
2942 = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);
2943
2944 bb_dead_regs[offset] &= ~bit;
2945 bb_live_regs[offset] |= bit;
2946 }
2947 }
2948 else
2949 {
2950 bb_dead_regs[offset] &= ~bit;
2951 bb_live_regs[offset] |= bit;
2952 }
2953 }
2954 return;
2955 }
2956
2957 case MEM:
2958 /* Handle tail-recursive case. */
2959 attach_deaths (XEXP (x, 0), insn, 0);
2960 return;
2961
2962 case SUBREG:
2963 case STRICT_LOW_PART:
2964 /* These two cases preserve the value of SET_P, so handle them
2965 separately. */
2966 attach_deaths (XEXP (x, 0), insn, set_p);
2967 return;
2968
2969 case ZERO_EXTRACT:
2970 case SIGN_EXTRACT:
2971 /* This case preserves the value of SET_P for the first operand, but
2972 clears it for the other two. */
2973 attach_deaths (XEXP (x, 0), insn, set_p);
2974 attach_deaths (XEXP (x, 1), insn, 0);
2975 attach_deaths (XEXP (x, 2), insn, 0);
2976 return;
2977
2978 default:
2979 /* Other cases: walk the insn. */
2980 fmt = GET_RTX_FORMAT (code);
2981 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2982 {
2983 if (fmt[i] == 'e')
2984 attach_deaths (XEXP (x, i), insn, 0);
2985 else if (fmt[i] == 'E')
2986 for (j = 0; j < XVECLEN (x, i); j++)
2987 attach_deaths (XVECEXP (x, i, j), insn, 0);
2988 }
2989 }
2990 }
2991
2992 /* After INSN has executed, add register death notes for each register
2993 that is dead after INSN. */
2994
2995 static void
2996 attach_deaths_insn (insn)
2997 rtx insn;
2998 {
2999 rtx x = PATTERN (insn);
3000 register RTX_CODE code = GET_CODE (x);
3001 rtx link;
3002
3003 if (code == SET)
3004 {
3005 attach_deaths (SET_SRC (x), insn, 0);
3006
3007 /* A register might die here even if it is the destination, e.g.
3008 it is the target of a volatile read and is otherwise unused.
3009 Hence we must always call attach_deaths for the SET_DEST. */
3010 attach_deaths (SET_DEST (x), insn, 1);
3011 }
3012 else if (code == PARALLEL)
3013 {
3014 register int i;
3015 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
3016 {
3017 code = GET_CODE (XVECEXP (x, 0, i));
3018 if (code == SET)
3019 {
3020 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
3021
3022 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
3023 }
3024 /* Flow does not add REG_DEAD notes to registers that die in
3025 clobbers, so we can't either. */
3026 else if (code != CLOBBER)
3027 attach_deaths (XVECEXP (x, 0, i), insn, 0);
3028 }
3029 }
3030 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
3031 MEM being clobbered, just like flow. */
3032 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
3033 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
3034 /* Otherwise don't add a death note to things being clobbered. */
3035 else if (code != CLOBBER)
3036 attach_deaths (x, insn, 0);
3037
3038 /* Make death notes for things used in the called function. */
3039 if (GET_CODE (insn) == CALL_INSN)
3040 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
3041 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
3042 GET_CODE (XEXP (link, 0)) == CLOBBER);
3043 }
3044
3045 /* Delete notes beginning with INSN and maybe put them in the chain
3046 of notes ended by NOTE_LIST.
3047 Returns the insn following the notes. */
3048
3049 static rtx
3050 unlink_notes (insn, tail)
3051 rtx insn, tail;
3052 {
3053 rtx prev = PREV_INSN (insn);
3054
3055 while (insn != tail && GET_CODE (insn) == NOTE)
3056 {
3057 rtx next = NEXT_INSN (insn);
3058 /* Delete the note from its current position. */
3059 if (prev)
3060 NEXT_INSN (prev) = next;
3061 if (next)
3062 PREV_INSN (next) = prev;
3063
3064 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
3065 /* Record line-number notes so they can be reused. */
3066 LINE_NOTE (insn) = insn;
3067
3068 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
3069 immediately after the call they follow. We use a fake
3070 (REG_DEAD (const_int -1)) note to remember them.
3071 Likewise with NOTE_INSN_LOOP_BEG and NOTE_INSN_LOOP_END. */
3072 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
3073 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
3074 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END)
3075 {
3076 /* Insert the note at the end of the notes list. */
3077 PREV_INSN (insn) = note_list;
3078 if (note_list)
3079 NEXT_INSN (note_list) = insn;
3080 note_list = insn;
3081 }
3082
3083 insn = next;
3084 }
3085 return insn;
3086 }
3087
3088 /* Constructor for `sometimes' data structure. */
3089
3090 static int
3091 new_sometimes_live (regs_sometimes_live, offset, bit, sometimes_max)
3092 struct sometimes *regs_sometimes_live;
3093 int offset, bit;
3094 int sometimes_max;
3095 {
3096 register struct sometimes *p;
3097 register int regno = offset * REGSET_ELT_BITS + bit;
3098
3099 /* There should never be a register greater than max_regno here. If there
3100 is, it means that a define_split has created a new pseudo reg. This
3101 is not allowed, since there will not be flow info available for any
3102 new register, so catch the error here. */
3103 if (regno >= max_regno)
3104 abort ();
3105
3106 p = &regs_sometimes_live[sometimes_max];
3107 p->offset = offset;
3108 p->bit = bit;
3109 p->live_length = 0;
3110 p->calls_crossed = 0;
3111 sometimes_max++;
3112 return sometimes_max;
3113 }
3114
3115 /* Count the lengths of all regs we are currently tracking and fold them
3116 into the per-register totals used to update the life information. */
3117
3118 static void
3119 finish_sometimes_live (regs_sometimes_live, sometimes_max)
3120 struct sometimes *regs_sometimes_live;
3121 int sometimes_max;
3122 {
3123 int i;
3124
3125 for (i = 0; i < sometimes_max; i++)
3126 {
3127 register struct sometimes *p = &regs_sometimes_live[i];
3128 int regno;
3129
3130 regno = p->offset * REGSET_ELT_BITS + p->bit;
3131
3132 sched_reg_live_length[regno] += p->live_length;
3133 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3134 }
3135 }
3136
3137 /* Search INSN for fake REG_DEAD notes for NOTE_INSN_SETJMP,
3138 NOTE_INSN_LOOP_BEG, and NOTE_INSN_LOOP_END; and convert them back
3139 into NOTEs. LAST is the last instruction output by the instruction
3140 scheduler. Return the new value of LAST. */
3141
3142 static rtx
3143 reemit_notes (insn, last)
3144 rtx insn;
3145 rtx last;
3146 {
3147 rtx note;
3148
3149 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
3150 {
3151 if (REG_NOTE_KIND (note) == REG_DEAD
3152 && GET_CODE (XEXP (note, 0)) == CONST_INT)
3153 {
3154 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
3155 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
3156 = CONST_CALL_P (note);
3157 else
3158 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
3159 remove_note (insn, note);
3160 }
3161 }
3162 return last;
3163 }
3164
3165 /* Use modified list scheduling to rearrange insns in basic block
3166 B. FILE, if nonzero, is where we dump interesting output about
3167 this pass. */
3168
3169 static void
3170 schedule_block (b, file)
3171 int b;
3172 FILE *file;
3173 {
3174 rtx insn, last;
3175 rtx *ready, link;
3176 int i, j, n_ready = 0, new_ready, n_insns;
3177 int sched_n_insns = 0;
3178 int clock;
3179 #define NEED_NOTHING 0
3180 #define NEED_HEAD 1
3181 #define NEED_TAIL 2
3182 int new_needs;
3183
3184 /* HEAD and TAIL delimit the region being scheduled. */
3185 rtx head = basic_block_head[b];
3186 rtx tail = basic_block_end[b];
3187 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
3188 being scheduled. When the insns have been ordered,
3189 these insns delimit where the new insns are to be
3190 spliced back into the insn chain. */
3191 rtx next_tail;
3192 rtx prev_head;
3193
3194 /* Keep life information accurate. */
3195 register struct sometimes *regs_sometimes_live;
3196 int sometimes_max;
3197
3198 if (file)
3199 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
3200 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3201
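/* Allocate and clear the per-block dependence tables: the last use and
   last set seen for every register, plus the set of registers pending a
   set in the insn currently being analyzed; clear_units resets the
   function unit reservation state.  */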
3202 i = max_reg_num ();
3203 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
3204 bzero ((char *) reg_last_uses, i * sizeof (rtx));
3205 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
3206 bzero ((char *) reg_last_sets, i * sizeof (rtx));
3207 reg_pending_sets = (regset) alloca (regset_bytes);
3208 bzero ((char *) reg_pending_sets, regset_bytes);
3209 reg_pending_sets_all = 0;
3210 clear_units ();
3211
3212 /* Remove certain insns at the beginning from scheduling,
3213 by advancing HEAD. */
3214
3215 /* At the start of a function, before reload has run, don't delay getting
3216 parameters from hard registers into pseudo registers. */
3217 if (reload_completed == 0 && b == 0)
3218 {
3219 while (head != tail
3220 && GET_CODE (head) == NOTE
3221 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
3222 head = NEXT_INSN (head);
3223 while (head != tail
3224 && GET_CODE (head) == INSN
3225 && GET_CODE (PATTERN (head)) == SET)
3226 {
3227 rtx src = SET_SRC (PATTERN (head));
3228 while (GET_CODE (src) == SUBREG
3229 || GET_CODE (src) == SIGN_EXTEND
3230 || GET_CODE (src) == ZERO_EXTEND
3231 || GET_CODE (src) == SIGN_EXTRACT
3232 || GET_CODE (src) == ZERO_EXTRACT)
3233 src = XEXP (src, 0);
3234 if (GET_CODE (src) != REG
3235 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
3236 break;
3237 /* Keep this insn from ever being scheduled. */
3238 INSN_REF_COUNT (head) = 1;
3239 head = NEXT_INSN (head);
3240 }
3241 }
3242
3243 /* Don't include any notes or labels at the beginning of the
3244 basic block, or notes at the ends of basic blocks. */
3245 while (head != tail)
3246 {
3247 if (GET_CODE (head) == NOTE)
3248 head = NEXT_INSN (head);
3249 else if (GET_CODE (tail) == NOTE)
3250 tail = PREV_INSN (tail);
3251 else if (GET_CODE (head) == CODE_LABEL)
3252 head = NEXT_INSN (head);
3253 else break;
3254 }
3255 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
3256 to schedule this block. */
3257 if (head == tail
3258 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
3259 return;
3260
3261 #if 0
3262 /* This short-cut doesn't work. It does not count call insns crossed by
3263 registers in reg_sometimes_live. It does not mark these registers as
3264 dead if they die in this block. It does not mark these registers live
3265 (or create new reg_sometimes_live entries if necessary) if they are born
3266 in this block.
3267
3268 The easy solution is to just always schedule a block. This block only
3269 has one insn, so this won't slow down this pass by much. */
3270
3271 if (head == tail)
3272 return;
3273 #endif
3274
3275 /* Now HEAD through TAIL are the insns actually to be rearranged;
3276 Let PREV_HEAD and NEXT_TAIL enclose them. */
3277 prev_head = PREV_INSN (head);
3278 next_tail = NEXT_INSN (tail);
3279
3280 /* Initialize basic block data structures. */
3281 dead_notes = 0;
3282 pending_read_insns = 0;
3283 pending_read_mems = 0;
3284 pending_write_insns = 0;
3285 pending_write_mems = 0;
3286 pending_lists_length = 0;
3287 last_pending_memory_flush = 0;
3288 last_function_call = 0;
3289 last_scheduled_insn = 0;
3290
3291 LOG_LINKS (sched_before_next_call) = 0;
3292
3293 n_insns = sched_analyze (head, tail);
3294 if (n_insns == 0)
3295 {
3296 free_pending_lists ();
3297 return;
3298 }
3299
3300 /* Allocate vector to hold insns to be rearranged (except those
3301 insns which are controlled by an insn with SCHED_GROUP_P set).
3302 All these insns are included between ORIG_HEAD and ORIG_TAIL,
3303 as those variables ultimately are set up. */
3304 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
3305
3306 /* TAIL is now the last of the insns to be rearranged.
3307 Put those insns into the READY vector. */
3308 insn = tail;
3309
3310 /* For all branches, calls, uses, and cc0 setters, force them to remain
3311 in order at the end of the block by adding dependencies and giving
3312 the last a high priority. There may be notes present, and prev_head
3313 may also be a note.
3314
3315 Branches must obviously remain at the end. Calls should remain at the
3316 end since moving them results in worse register allocation. Uses remain
3317 at the end to ensure proper register allocation. cc0 setters remain
3318 at the end because they can't be moved away from their cc0 user. */
3319 last = 0;
3320 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
3321 || (GET_CODE (insn) == INSN
3322 && (GET_CODE (PATTERN (insn)) == USE
3323 #ifdef HAVE_cc0
3324 || sets_cc0_p (PATTERN (insn))
3325 #endif
3326 ))
3327 || GET_CODE (insn) == NOTE)
3328 {
3329 if (GET_CODE (insn) != NOTE)
3330 {
3331 priority (insn);
3332 if (last == 0)
3333 {
3334 ready[n_ready++] = insn;
3335 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
3336 INSN_REF_COUNT (insn) = 0;
3337 }
3338 else if (! find_insn_list (insn, LOG_LINKS (last)))
3339 {
3340 add_dependence (last, insn, REG_DEP_ANTI);
3341 INSN_REF_COUNT (insn)++;
3342 }
3343 last = insn;
3344
3345 /* Skip over insns that are part of a group. */
3346 while (SCHED_GROUP_P (insn))
3347 {
3348 insn = prev_nonnote_insn (insn);
3349 priority (insn);
3350 }
3351 }
3352
3353 insn = PREV_INSN (insn);
3354 /* Don't overrun the bounds of the basic block. */
3355 if (insn == prev_head)
3356 break;
3357 }
3358
3359 /* Assign priorities to instructions. Also check whether they
3360 are in priority order already. If so then I will be nonnegative.
3361 We use this shortcut only before reloading. */
3362 #if 0
3363 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
3364 #endif
3365
3366 for (; insn != prev_head; insn = PREV_INSN (insn))
3367 {
3368 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3369 {
3370 priority (insn);
3371 if (INSN_REF_COUNT (insn) == 0)
3372 {
3373 if (last == 0)
3374 ready[n_ready++] = insn;
3375 else
3376 {
3377 /* Make this dependent on the last of the instructions
3378 that must remain in order at the end of the block. */
3379 add_dependence (last, insn, REG_DEP_ANTI);
3380 INSN_REF_COUNT (insn) = 1;
3381 }
3382 }
3383 if (SCHED_GROUP_P (insn))
3384 {
3385 while (SCHED_GROUP_P (insn))
3386 {
3387 insn = prev_nonnote_insn (insn);
3388 priority (insn);
3389 }
3390 continue;
3391 }
3392 #if 0
3393 if (i < 0)
3394 continue;
3395 if (INSN_PRIORITY (insn) < i)
3396 i = INSN_PRIORITY (insn);
3397 else if (INSN_PRIORITY (insn) > i)
3398 i = DONE_PRIORITY;
3399 #endif
3400 }
3401 }
3402
3403 #if 0
3404 /* This short-cut doesn't work. It does not count call insns crossed by
3405 registers in reg_sometimes_live. It does not mark these registers as
3406 dead if they die in this block. It does not mark these registers live
3407 (or create new reg_sometimes_live entries if necessary) if they are born
3408 in this block.
3409
3410 The easy solution is to just always schedule a block. These blocks tend
3411 to be very short, so this doesn't slow down this pass by much. */
3412
3413 /* If existing order is good, don't bother to reorder. */
3414 if (i != DONE_PRIORITY)
3415 {
3416 if (file)
3417 fprintf (file, ";; already scheduled\n");
3418
3419 if (reload_completed == 0)
3420 {
3421 for (i = 0; i < sometimes_max; i++)
3422 regs_sometimes_live[i].live_length += n_insns;
3423
3424 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3425 }
3426 free_pending_lists ();
3427 return;
3428 }
3429 #endif
3430
3431 /* Scan all the insns to be scheduled, removing NOTE insns
3432 and register death notes.
3433 Line number NOTE insns end up in NOTE_LIST.
3434 Register death notes end up in DEAD_NOTES.
3435
3436 Recreate the register life information for the end of this basic
3437 block. */
3438
3439 if (reload_completed == 0)
3440 {
3441 bcopy ((char *) basic_block_live_at_start[b], (char *) bb_live_regs,
3442 regset_bytes);
3443 bzero ((char *) bb_dead_regs, regset_bytes);
3444
3445 if (b == 0)
3446 {
3447 /* This is the first block in the function. There may be insns
3448 before head that we can't schedule. We still need to examine
3449 them though for accurate register lifetime analysis. */
3450
3451 /* We don't want to remove any REG_DEAD notes as the code below
3452 does. */
3453
3454 for (insn = basic_block_head[b]; insn != head;
3455 insn = NEXT_INSN (insn))
3456 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3457 {
3458 /* See if the register gets born here. */
3459 /* We must check for registers being born before we check for
3460 registers dying. It is possible for a register to be born
3461 and die in the same insn, e.g. reading from a volatile
3462 memory location into an otherwise unused register. Such
3463 a register must be marked as dead after this insn. */
3464 if (GET_CODE (PATTERN (insn)) == SET
3465 || GET_CODE (PATTERN (insn)) == CLOBBER)
3466 sched_note_set (b, PATTERN (insn), 0);
3467 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3468 {
3469 int j;
3470 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3471 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3472 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3473 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3474
3475 /* ??? This code is obsolete and should be deleted. It
3476 is harmless though, so we will leave it in for now. */
3477 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3478 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3479 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3480 }
3481
3482 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
3483 {
3484 if ((REG_NOTE_KIND (link) == REG_DEAD
3485 || REG_NOTE_KIND (link) == REG_UNUSED)
3486 /* Verify that the REG_NOTE has a valid value. */
3487 && GET_CODE (XEXP (link, 0)) == REG)
3488 {
3489 register int regno = REGNO (XEXP (link, 0));
3490 register int offset = regno / REGSET_ELT_BITS;
3491 register REGSET_ELT_TYPE bit
3492 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
3493
3494 if (regno < FIRST_PSEUDO_REGISTER)
3495 {
3496 int j = HARD_REGNO_NREGS (regno,
3497 GET_MODE (XEXP (link, 0)));
3498 while (--j >= 0)
3499 {
3500 offset = (regno + j) / REGSET_ELT_BITS;
3501 bit = ((REGSET_ELT_TYPE) 1
3502 << ((regno + j) % REGSET_ELT_BITS));
3503
3504 bb_live_regs[offset] &= ~bit;
3505 bb_dead_regs[offset] |= bit;
3506 }
3507 }
3508 else
3509 {
3510 bb_live_regs[offset] &= ~bit;
3511 bb_dead_regs[offset] |= bit;
3512 }
3513 }
3514 }
3515 }
3516 }
3517 }
3518
3519 /* If debugging information is being produced, keep track of the line
3520 number notes for each insn. */
3521 if (write_symbols != NO_DEBUG)
3522 {
3523 /* We must use the true line number for the first insn in the block
3524 that was computed and saved at the start of this pass. We can't
3525 use the current line number, because scheduling of the previous
3526 block may have changed the current line number. */
3527 rtx line = line_note_head[b];
3528
3529 for (insn = basic_block_head[b];
3530 insn != next_tail;
3531 insn = NEXT_INSN (insn))
3532 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3533 line = insn;
3534 else
3535 LINE_NOTE (insn) = line;
3536 }
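  /* LINE_NOTE (insn) now records the line-number note that was in effect
     just before INSN.  This mapping is what allows the pass to re-emit
     line notes in the proper places after the insns have been reordered;
     see the "Restore the line-number notes" loop near the end of this
     function.  */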
3537
3538 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3539 {
3540 rtx prev, next, link;
3541
3542 /* Farm out notes. This is needed to keep the debugger from
3543 getting completely deranged. */
3544 if (GET_CODE (insn) == NOTE)
3545 {
3546 prev = insn;
3547 insn = unlink_notes (insn, next_tail);
3548 if (prev == tail)
3549 abort ();
3550 if (prev == head)
3551 abort ();
3552 if (insn == next_tail)
3553 abort ();
3554 }
3555
3556 if (reload_completed == 0
3557 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3558 {
3559 /* See if the register gets born here. */
3560 /* We must check for registers being born before we check for
3561 registers dying. It is possible for a register to be born and
3562 die in the same insn, e.g. reading from a volatile memory
3563 location into an otherwise unused register. Such a register
3564 must be marked as dead after this insn. */
3565 if (GET_CODE (PATTERN (insn)) == SET
3566 || GET_CODE (PATTERN (insn)) == CLOBBER)
3567 sched_note_set (b, PATTERN (insn), 0);
3568 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3569 {
3570 int j;
3571 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3572 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3573 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3574 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3575
3576 /* ??? This code is obsolete and should be deleted. It
3577 is harmless though, so we will leave it in for now. */
3578 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3579 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3580 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3581 }
3582
3583 /* Need to know what registers this insn kills. */
3584 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3585 {
3586 next = XEXP (link, 1);
3587 if ((REG_NOTE_KIND (link) == REG_DEAD
3588 || REG_NOTE_KIND (link) == REG_UNUSED)
3589 /* Verify that the REG_NOTE has a valid value. */
3590 && GET_CODE (XEXP (link, 0)) == REG)
3591 {
3592 register int regno = REGNO (XEXP (link, 0));
3593 register int offset = regno / REGSET_ELT_BITS;
3594 register REGSET_ELT_TYPE bit
3595 = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
3596
3597 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3598 alone. */
3599 if (REG_NOTE_KIND (link) == REG_DEAD)
3600 {
3601 if (prev)
3602 XEXP (prev, 1) = next;
3603 else
3604 REG_NOTES (insn) = next;
3605 XEXP (link, 1) = dead_notes;
3606 dead_notes = link;
3607 }
3608 else
3609 prev = link;
3610
3611 if (regno < FIRST_PSEUDO_REGISTER)
3612 {
3613 int j = HARD_REGNO_NREGS (regno,
3614 GET_MODE (XEXP (link, 0)));
3615 while (--j >= 0)
3616 {
3617 offset = (regno + j) / REGSET_ELT_BITS;
3618 bit = ((REGSET_ELT_TYPE) 1
3619 << ((regno + j) % REGSET_ELT_BITS));
3620
3621 bb_live_regs[offset] &= ~bit;
3622 bb_dead_regs[offset] |= bit;
3623 }
3624 }
3625 else
3626 {
3627 bb_live_regs[offset] &= ~bit;
3628 bb_dead_regs[offset] |= bit;
3629 }
3630 }
3631 else
3632 prev = link;
3633 }
3634 }
3635 }
3636
3637 if (reload_completed == 0)
3638 {
3639 /* Keep track of register lives. */
3640 old_live_regs = (regset) alloca (regset_bytes);
3641 regs_sometimes_live
3642 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3643 sometimes_max = 0;
3644
3645 /* Start with registers live at end. */
3646 for (j = 0; j < regset_size; j++)
3647 {
3648 REGSET_ELT_TYPE live = bb_live_regs[j];
3649 old_live_regs[j] = live;
3650 if (live)
3651 {
3652 register int bit;
3653 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
3654 if (live & ((REGSET_ELT_TYPE) 1 << bit))
3655 sometimes_max = new_sometimes_live (regs_sometimes_live, j,
3656 bit, sometimes_max);
3657 }
3658 }
3659 }
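  /* Each entry in REGS_SOMETIMES_LIVE names one register (by regset
     OFFSET and BIT) that has been live at some point during this block.
     Its LIVE_LENGTH and CALLS_CROSSED fields accumulate over the
     register's current lifetime segment; when the register goes dead,
     the totals are flushed into sched_reg_live_length and
     sched_reg_n_calls_crossed in the scheduling loop below.  */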
3660
3661 SCHED_SORT (ready, n_ready, 1);
3662
3663 if (file)
3664 {
3665 fprintf (file, ";; ready list initially:\n;; ");
3666 for (i = 0; i < n_ready; i++)
3667 fprintf (file, "%d ", INSN_UID (ready[i]));
3668 fprintf (file, "\n\n");
3669
3670 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3671 if (INSN_PRIORITY (insn) > 0)
3672 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3673 INSN_UID (insn), INSN_PRIORITY (insn),
3674 INSN_REF_COUNT (insn));
3675 }
3676
3677 /* Now HEAD and TAIL are going to become disconnected
3678 entirely from the insn chain. */
3679 tail = 0;
3680
3681 /* Q_SIZE will always be zero here. */
3682 q_ptr = 0; clock = 0;
3683 bzero ((char *) insn_queue, sizeof (insn_queue));
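  /* INSN_QUEUE is used as a circular buffer: slot Q_PTR holds the insns
     that become ready on the current cycle, and NEXT_Q / NEXT_Q_AFTER
     step Q_PTR around the buffer as CLOCK advances.  An insn that must
     stall for N cycles is queued N slots ahead of Q_PTR.  */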
3684
3685 /* Now, perform list scheduling. */
3686
3687 /* Where we start inserting insns is after TAIL. */
3688 last = next_tail;
3689
3690 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3691 ? NEED_HEAD : NEED_NOTHING);
3692 if (PREV_INSN (next_tail) == basic_block_end[b])
3693 new_needs |= NEED_TAIL;
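  /* NEW_NEEDS records whether the span of insns being scheduled includes
     the first and/or last insn of block B, so that basic_block_head[b]
     and basic_block_end[b] can be updated after the insns are re-chained;
     see the NEED_HEAD and NEED_TAIL tests near the end of this function.  */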
3694
3695 new_ready = n_ready;
3696 while (sched_n_insns < n_insns)
3697 {
3698 q_ptr = NEXT_Q (q_ptr); clock++;
3699
3700 /* Add all pending insns that can be scheduled without stalls to the
3701 ready list. */
3702 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3703 {
3704 if (file)
3705 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3706 INSN_UID (insn), INSN_UID (last), clock);
3707 ready[new_ready++] = insn;
3708 q_size -= 1;
3709 }
3710 insn_queue[q_ptr] = 0;
3711
3712 /* If there are no ready insns, stall until one is ready and add all
3713 of the pending insns at that point to the ready list. */
3714 if (new_ready == 0)
3715 {
3716 register int stalls;
3717
3718 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3719 if (insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)])
3720 {
3721 for (; insn; insn = NEXT_INSN (insn))
3722 {
3723 if (file)
3724 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3725 INSN_UID (insn), INSN_UID (last), stalls, clock);
3726 ready[new_ready++] = insn;
3727 q_size -= 1;
3728 }
3729 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3730 break;
3731 }
3732
3733 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3734 }
3735
3736 /* There should be some instructions waiting to fire. */
3737 if (new_ready == 0)
3738 abort ();
3739
3740 if (file)
3741 {
3742 fprintf (file, ";; ready list at T-%d:", clock);
3743 for (i = 0; i < new_ready; i++)
3744 fprintf (file, " %d (%x)",
3745 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3746 }
3747
3748 /* Sort the ready list and choose the best insn to schedule. Select
3749 which insn should issue in this cycle and queue those that are
3750 blocked by function unit hazards.
3751
3752 N_READY holds the number of items that were scheduled the last time,
3753 minus the one instruction scheduled on the last loop iteration; it
3754 is not modified for any other reason in this loop. */
3755
3756 SCHED_SORT (ready, new_ready, n_ready);
3757 if (MAX_BLOCKAGE > 1)
3758 {
3759 new_ready = schedule_select (ready, new_ready, clock, file);
3760 if (new_ready == 0)
3761 {
3762 if (file)
3763 fprintf (file, "\n");
3764 /* We must set n_ready here, to ensure that sorting always
3765 occurs when we come back to the SCHED_SORT line above. */
3766 n_ready = 0;
3767 continue;
3768 }
3769 }
3770 n_ready = new_ready;
3771 last_scheduled_insn = insn = ready[0];
3772
3773 /* The first insn scheduled becomes the new tail. */
3774 if (tail == 0)
3775 tail = insn;
3776
3777 if (file)
3778 {
3779 fprintf (file, ", now");
3780 for (i = 0; i < n_ready; i++)
3781 fprintf (file, " %d", INSN_UID (ready[i]));
3782 fprintf (file, "\n");
3783 }
3784
3785 if (DONE_PRIORITY_P (insn))
3786 abort ();
3787
3788 if (reload_completed == 0)
3789 {
3790 /* Process this insn, and each insn linked to this one which must
3791 be immediately output after this insn. */
3792 do
3793 {
3794 /* First we kill registers set by this insn, and then we
3795 make registers used by this insn live. This is the opposite
3796 order used above because we are traversing the instructions
3797 backwards. */
3798
3799 /* Strictly speaking, we should scan REG_UNUSED notes and make
3800 every register mentioned there live; however, we would just
3801 kill them again immediately below, so there doesn't seem to
3802 be any reason to bother doing this. */
3803
3804 /* See if this is the last notice we must take of a register. */
3805 if (GET_CODE (PATTERN (insn)) == SET
3806 || GET_CODE (PATTERN (insn)) == CLOBBER)
3807 sched_note_set (b, PATTERN (insn), 1);
3808 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3809 {
3810 int j;
3811 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3812 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3813 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3814 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3815 }
3816
3817 /* This code keeps life analysis information up to date. */
3818 if (GET_CODE (insn) == CALL_INSN)
3819 {
3820 register struct sometimes *p;
3821
3822 /* A call kills all call used and global registers, except
3823 for those mentioned in the call pattern which will be
3824 made live again later. */
3825 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3826 if ((call_used_regs[i] && ! fixed_regs[i])
3827 || global_regs[i])
3828 {
3829 register int offset = i / REGSET_ELT_BITS;
3830 register REGSET_ELT_TYPE bit
3831 = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);
3832
3833 bb_live_regs[offset] &= ~bit;
3834 bb_dead_regs[offset] |= bit;
3835 }
3836
3837 /* Regs live at the time of a call instruction must not
3838 go in a register clobbered by calls. Record this for
3839 all regs now live. Note that insns which are born or
3840 die in a call do not cross a call, so this must be done
3841 after the killings (above) and before the births
3842 (below). */
3843 p = regs_sometimes_live;
3844 for (i = 0; i < sometimes_max; i++, p++)
3845 if (bb_live_regs[p->offset]
3846 & ((REGSET_ELT_TYPE) 1 << p->bit))
3847 p->calls_crossed += 1;
3848 }
3849
3850 /* Make every register used live, and add REG_DEAD notes for
3851 registers which were not live before we started. */
3852 attach_deaths_insn (insn);
3853
3854 /* Find registers now made live by that instruction. */
3855 for (i = 0; i < regset_size; i++)
3856 {
3857 REGSET_ELT_TYPE diff = bb_live_regs[i] & ~old_live_regs[i];
3858 if (diff)
3859 {
3860 register int bit;
3861 old_live_regs[i] |= diff;
3862 for (bit = 0; bit < REGSET_ELT_BITS; bit++)
3863 if (diff & ((REGSET_ELT_TYPE) 1 << bit))
3864 sometimes_max
3865 = new_sometimes_live (regs_sometimes_live, i, bit,
3866 sometimes_max);
3867 }
3868 }
3869
3870 /* Count lengths of all regs we are worrying about now,
3871 and handle registers no longer live. */
3872
3873 for (i = 0; i < sometimes_max; i++)
3874 {
3875 register struct sometimes *p = &regs_sometimes_live[i];
3876 int regno = p->offset*REGSET_ELT_BITS + p->bit;
3877
3878 p->live_length += 1;
3879
3880 if ((bb_live_regs[p->offset]
3881 & ((REGSET_ELT_TYPE) 1 << p->bit)) == 0)
3882 {
3883 /* This is the end of one of this register's lifetime
3884 segments. Save the lifetime info collected so far,
3885 and clear its bit in the old_live_regs entry. */
3886 sched_reg_live_length[regno] += p->live_length;
3887 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3888 old_live_regs[p->offset]
3889 &= ~((REGSET_ELT_TYPE) 1 << p->bit);
3890
3891 /* Delete the reg_sometimes_live entry for this reg by
3892 copying the last entry over top of it. */
3893 *p = regs_sometimes_live[--sometimes_max];
3894 /* ...and decrement i so that this newly copied entry
3895 will be processed. */
3896 i--;
3897 }
3898 }
3899
3900 link = insn;
3901 insn = PREV_INSN (insn);
3902 }
3903 while (SCHED_GROUP_P (link));
3904
3905 /* Set INSN back to the insn we are scheduling now. */
3906 insn = ready[0];
3907 }
3908
3909 /* Schedule INSN. Remove it from the ready list. */
3910 ready += 1;
3911 n_ready -= 1;
3912
3913 sched_n_insns += 1;
3914 NEXT_INSN (insn) = last;
3915 PREV_INSN (last) = insn;
3916
3917 /* Everything that precedes INSN now either becomes "ready", if
3918 it can execute immediately before INSN, or "pending", if
3919 there must be a delay. Give INSN high enough priority that
3920 at least one (maybe more) reg-killing insns can be launched
3921 ahead of all others. Mark INSN as scheduled by changing its
3922 priority to -1. */
3923 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3924 new_ready = schedule_insn (insn, ready, n_ready, clock);
3925 INSN_PRIORITY (insn) = DONE_PRIORITY;
3926
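      /* An insn with SCHED_GROUP_P set must stay immediately after the
	 insn that precedes it in the original chain (for example, a
	 condition-code user kept next to its setter), so INSN and its
	 whole chain of grouped predecessors are handled as one unit.  */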
3927 /* Schedule all prior insns that must not be moved. */
3928 if (SCHED_GROUP_P (insn))
3929 {
3930 /* Disable these insns from being launched, in case one of the
3931 insns in the group has a dependency on an earlier one. */
3932 link = insn;
3933 while (SCHED_GROUP_P (link))
3934 {
3935 /* Disable these insns from being launched by anybody. */
3936 link = PREV_INSN (link);
3937 INSN_REF_COUNT (link) = 0;
3938 }
3939
3940 /* Now handle each group insn like the main insn was handled
3941 above. */
3942 link = insn;
3943 while (SCHED_GROUP_P (link))
3944 {
3945 link = PREV_INSN (link);
3946
3947 sched_n_insns += 1;
3948
3949 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3950 new_ready = schedule_insn (link, ready, new_ready, clock);
3951 INSN_PRIORITY (link) = DONE_PRIORITY;
3952 }
3953 }
3954
3955 /* Put back NOTE_INSN_SETJMP, NOTE_INSN_LOOP_BEGIN, and
3956 NOTE_INSN_LOOP_END notes. */
3957
3958 /* Prime the loop: we need to handle INSN and all the insns in its
3959 sched group. */
3960 last = NEXT_INSN (insn);
3961 do
3962 {
3963 insn = PREV_INSN (last);
3964
3965 /* Maintain a valid chain so emit_note_before works.
3966 This is necessary because PREV_INSN (insn) isn't valid
3967 (if ! SCHED_GROUP_P) and if it points to an insn already
3968 scheduled, a circularity will result. */
3969 if (! SCHED_GROUP_P (insn))
3970 {
3971 NEXT_INSN (prev_head) = insn;
3972 PREV_INSN (insn) = prev_head;
3973 }
3974
3975 last = reemit_notes (insn, insn);
3976 }
3977 while (SCHED_GROUP_P (insn));
3978 }
3979 if (q_size != 0)
3980 abort ();
3981
3982 if (reload_completed == 0)
3983 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3984
3985 /* HEAD is now the first insn in the chain of insns that
3986 have been scheduled by the loop above.
3987 TAIL is the last of those insns. */
3988 head = last;
3989
3990 /* NOTE_LIST is the end of a chain of notes previously found
3991 among the insns. Insert them at the beginning of the insns. */
3992 if (note_list != 0)
3993 {
3994 rtx note_head = note_list;
3995 while (PREV_INSN (note_head))
3996 note_head = PREV_INSN (note_head);
3997
3998 PREV_INSN (head) = note_list;
3999 NEXT_INSN (note_list) = head;
4000 head = note_head;
4001 }
4002
4003 /* There should be no REG_DEAD notes left over at the end.
4004 In practice, this can occur as the result of bugs in flow, combine.c,
4005 and/or sched.c. The values of the REG_DEAD notes remaining are
4006 meaningless, because dead_notes is just used as a free list. */
4007 #if 1
4008 if (dead_notes != 0)
4009 abort ();
4010 #endif
4011
4012 if (new_needs & NEED_HEAD)
4013 basic_block_head[b] = head;
4014 PREV_INSN (head) = prev_head;
4015 NEXT_INSN (prev_head) = head;
4016
4017 if (new_needs & NEED_TAIL)
4018 basic_block_end[b] = tail;
4019 NEXT_INSN (tail) = next_tail;
4020 PREV_INSN (next_tail) = tail;
4021
4022 /* Restore the line-number notes of each insn. */
4023 if (write_symbols != NO_DEBUG)
4024 {
4025 rtx line, note, prev, new;
4026 int notes = 0;
4027
4028 head = basic_block_head[b];
4029 next_tail = NEXT_INSN (basic_block_end[b]);
4030
4031 /* Determine the current line-number. We want to know the current
4032 line number of the first insn of the block here, in case it is
4033 different from the true line number that was saved earlier. If
4034 different, then we need a line number note before the first insn
4035 of this block. If it happens to be the same, then we don't want to
4036 emit another line number note here. */
4037 for (line = head; line; line = PREV_INSN (line))
4038 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4039 break;
4040
4041 /* Walk the insns keeping track of the current line-number and inserting
4042 the line-number notes as needed. */
4043 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
4044 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4045 line = insn;
4046 /* This used to emit line number notes before every non-deleted note.
4047 However, this confuses a debugger, because line notes not separated
4048 by real instructions all end up at the same address. I can find no
4049 use for line number notes before other notes, so none are emitted. */
4050 else if (GET_CODE (insn) != NOTE
4051 && (note = LINE_NOTE (insn)) != 0
4052 && note != line
4053 && (line == 0
4054 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
4055 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
4056 {
4057 line = note;
4058 prev = PREV_INSN (insn);
4059 if (LINE_NOTE (note))
4060 {
4061 /* Re-use the original line-number note. */
4062 LINE_NOTE (note) = 0;
4063 PREV_INSN (note) = prev;
4064 NEXT_INSN (prev) = note;
4065 PREV_INSN (insn) = note;
4066 NEXT_INSN (note) = insn;
4067 }
4068 else
4069 {
4070 notes++;
4071 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
4072 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
4073 RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
4074 }
4075 }
4076 if (file && notes)
4077 fprintf (file, ";; added %d line-number notes\n", notes);
4078 }
4079
4080 if (file)
4081 {
4082 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
4083 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
4084 }
4085
4086 /* Yow! We're done! */
4087 free_pending_lists ();
4088
4089 return;
4090 }
4091 \f
4092 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
4093 REGNO, returning the rtx of the reference found if any. Otherwise,
4094 returns 0. */
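/* For example, if X is (set (reg 4) (plus (reg 4) (mem (reg 7)))), then
   regno_use_in (7, x) returns the inner (reg 7) expression; this is used
   by split_hard_reg_notes below to locate the pieces of a split hard
   register.  */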
4095
4096 static rtx
4097 regno_use_in (regno, x)
4098 int regno;
4099 rtx x;
4100 {
4101 register char *fmt;
4102 int i, j;
4103 rtx tem;
4104
4105 if (GET_CODE (x) == REG && REGNO (x) == regno)
4106 return x;
4107
4108 fmt = GET_RTX_FORMAT (GET_CODE (x));
4109 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
4110 {
4111 if (fmt[i] == 'e')
4112 {
4113 if (tem = regno_use_in (regno, XEXP (x, i)))
4114 return tem;
4115 }
4116 else if (fmt[i] == 'E')
4117 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
4118 if (tem = regno_use_in (regno , XVECEXP (x, i, j)))
4119 return tem;
4120 }
4121
4122 return 0;
4123 }
4124
4125 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
4126 needed for the hard register mentioned in the note. This can happen
4127 if the reference to the hard register in the original insn was split into
4128 several smaller hard register references in the split insns. */
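/* For example, a REG_DEAD note for a two-word hard register (say a DImode
   value in hard regs 2 and 3) may now correspond to two separate
   single-word references in different split insns; each of those insns
   then gets its own REG_DEAD note for the piece it mentions.  */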
4129
4130 static void
4131 split_hard_reg_notes (note, first, last, orig_insn)
4132 rtx note, first, last, orig_insn;
4133 {
4134 rtx reg, temp, link;
4135 int n_regs, i, new_reg;
4136 rtx insn;
4137
4138 /* Assume that this is a REG_DEAD note. */
4139 if (REG_NOTE_KIND (note) != REG_DEAD)
4140 abort ();
4141
4142 reg = XEXP (note, 0);
4143
4144 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
4145
4146 for (i = 0; i < n_regs; i++)
4147 {
4148 new_reg = REGNO (reg) + i;
4149
4150 /* Check for references to new_reg in the split insns. */
4151 for (insn = last; ; insn = PREV_INSN (insn))
4152 {
4153 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4154 && (temp = regno_use_in (new_reg, PATTERN (insn))))
4155 {
4156 /* Create a new reg dead note here. */
4157 link = rtx_alloc (EXPR_LIST);
4158 PUT_REG_NOTE_KIND (link, REG_DEAD);
4159 XEXP (link, 0) = temp;
4160 XEXP (link, 1) = REG_NOTES (insn);
4161 REG_NOTES (insn) = link;
4162
4163 /* If TEMP spans multiple hard registers, this one note covers them all, so skip over the extra registers. */
4164 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
4165
4166 break;
4167 }
4168 /* It isn't mentioned anywhere, so no new reg note is needed for
4169 this register. */
4170 if (insn == first)
4171 break;
4172 }
4173 }
4174 }
4175
4176 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
4177 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
4178
4179 static void
4180 new_insn_dead_notes (pat, insn, last, orig_insn)
4181 rtx pat, insn, last, orig_insn;
4182 {
4183 rtx dest, tem, set;
4184
4185 /* PAT is either a CLOBBER or a SET here. */
4186 dest = XEXP (pat, 0);
4187
4188 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4189 || GET_CODE (dest) == STRICT_LOW_PART
4190 || GET_CODE (dest) == SIGN_EXTRACT)
4191 dest = XEXP (dest, 0);
4192
4193 if (GET_CODE (dest) == REG)
4194 {
4195 for (tem = last; tem != insn; tem = PREV_INSN (tem))
4196 {
4197 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
4198 && reg_overlap_mentioned_p (dest, PATTERN (tem))
4199 && (set = single_set (tem)))
4200 {
4201 rtx tem_dest = SET_DEST (set);
4202
4203 while (GET_CODE (tem_dest) == ZERO_EXTRACT
4204 || GET_CODE (tem_dest) == SUBREG
4205 || GET_CODE (tem_dest) == STRICT_LOW_PART
4206 || GET_CODE (tem_dest) == SIGN_EXTRACT)
4207 tem_dest = XEXP (tem_dest, 0);
4208
4209 if (! rtx_equal_p (tem_dest, dest))
4210 {
4211 /* Use the same scheme as combine.c, don't put both REG_DEAD
4212 and REG_UNUSED notes on the same insn. */
4213 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
4214 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
4215 {
4216 rtx note = rtx_alloc (EXPR_LIST);
4217 PUT_REG_NOTE_KIND (note, REG_DEAD);
4218 XEXP (note, 0) = dest;
4219 XEXP (note, 1) = REG_NOTES (tem);
4220 REG_NOTES (tem) = note;
4221 }
4222 /* The reg only dies in one insn, the last one that uses
4223 it. */
4224 break;
4225 }
4226 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
4227 /* We found an instruction that both uses the register,
4228 and sets it, so no new REG_NOTE is needed for this set. */
4229 break;
4230 }
4231 }
4232 /* If this is a set, it must die somewhere, unless it is the dest of
4233 the original insn, and hence is live after the original insn. Abort
4234 if it isn't supposed to be live after the original insn.
4235
4236 If this is a clobber, then just add a REG_UNUSED note. */
4237 if (tem == insn)
4238 {
4239 int live_after_orig_insn = 0;
4240 rtx pattern = PATTERN (orig_insn);
4241 int i;
4242
4243 if (GET_CODE (pat) == CLOBBER)
4244 {
4245 rtx note = rtx_alloc (EXPR_LIST);
4246 PUT_REG_NOTE_KIND (note, REG_UNUSED);
4247 XEXP (note, 0) = dest;
4248 XEXP (note, 1) = REG_NOTES (insn);
4249 REG_NOTES (insn) = note;
4250 return;
4251 }
4252
4253 /* The original insn could have multiple sets, so search the
4254 insn for all sets. */
4255 if (GET_CODE (pattern) == SET)
4256 {
4257 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
4258 live_after_orig_insn = 1;
4259 }
4260 else if (GET_CODE (pattern) == PARALLEL)
4261 {
4262 for (i = 0; i < XVECLEN (pattern, 0); i++)
4263 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
4264 && reg_overlap_mentioned_p (dest,
4265 SET_DEST (XVECEXP (pattern,
4266 0, i))))
4267 live_after_orig_insn = 1;
4268 }
4269
4270 if (! live_after_orig_insn)
4271 abort ();
4272 }
4273 }
4274 }
4275
4276 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
4277 registers modified by X. INC is -1 if the containing insn is being deleted,
4278 and is 1 if the containing insn is a newly generated insn. */
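/* For example, when update_flow_info below replaces one insn setting
   (reg 100) with two split insns that each set it, the original SET is
   counted with INC of -1 and each new SET with INC of 1, leaving
   reg_n_sets[100] at 2 so that the register is no longer treated as
   being set exactly once.  */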
4279
4280 static void
4281 update_n_sets (x, inc)
4282 rtx x;
4283 int inc;
4284 {
4285 rtx dest = SET_DEST (x);
4286
4287 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
4288 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
4289 dest = SUBREG_REG (dest);
4290
4291 if (GET_CODE (dest) == REG)
4292 {
4293 int regno = REGNO (dest);
4294
4295 if (regno < FIRST_PSEUDO_REGISTER)
4296 {
4297 register int i;
4298 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
4299
4300 for (i = regno; i < endregno; i++)
4301 reg_n_sets[i] += inc;
4302 }
4303 else
4304 reg_n_sets[regno] += inc;
4305 }
4306 }
4307
4308 /* Updates all flow-analysis related quantities (including REG_NOTES) for
4309 the insns from FIRST to LAST inclusive that were created by splitting
4310 ORIG_INSN. NOTES are the original REG_NOTES. */
4311
4312 static void
4313 update_flow_info (notes, first, last, orig_insn)
4314 rtx notes;
4315 rtx first, last;
4316 rtx orig_insn;
4317 {
4318 rtx insn, note;
4319 rtx next;
4320 rtx orig_dest, temp;
4321 rtx set;
4322
4323 /* Get and save the destination set by the original insn. */
4324
4325 orig_dest = single_set (orig_insn);
4326 if (orig_dest)
4327 orig_dest = SET_DEST (orig_dest);
4328
4329 /* Move REG_NOTES from the original insn to where they now belong. */
4330
4331 for (note = notes; note; note = next)
4332 {
4333 next = XEXP (note, 1);
4334 switch (REG_NOTE_KIND (note))
4335 {
4336 case REG_DEAD:
4337 case REG_UNUSED:
4338 /* Move these notes from the original insn to the last new insn where
4339 the register is now set. */
4340
4341 for (insn = last; ; insn = PREV_INSN (insn))
4342 {
4343 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4344 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4345 {
4346 /* If this note refers to a multiple word hard register, it
4347 may have been split into several smaller hard register
4348 references, so handle it specially. */
4349 temp = XEXP (note, 0);
4350 if (REG_NOTE_KIND (note) == REG_DEAD
4351 && GET_CODE (temp) == REG
4352 && REGNO (temp) < FIRST_PSEUDO_REGISTER
4353 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
4354 split_hard_reg_notes (note, first, last, orig_insn);
4355 else
4356 {
4357 XEXP (note, 1) = REG_NOTES (insn);
4358 REG_NOTES (insn) = note;
4359 }
4360
4361 /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
4362 notes. */
4363 /* ??? This won't handle multiple word registers correctly,
4364 but should be good enough for now. */
4365 if (REG_NOTE_KIND (note) == REG_UNUSED
4366 && ! dead_or_set_p (insn, XEXP (note, 0)))
4367 PUT_REG_NOTE_KIND (note, REG_DEAD);
4368
4369 /* The reg only dies in one insn, the last one that uses
4370 it. */
4371 break;
4372 }
4373 /* It must die somewhere; fail if we couldn't find where it died.
4374
4375 If this is a REG_UNUSED note, then it must be a temporary
4376 register that was not needed by this instantiation of the
4377 pattern, so we can safely ignore it. */
4378 if (insn == first)
4379 {
4380 if (REG_NOTE_KIND (note) != REG_UNUSED)
4381 abort ();
4382
4383 break;
4384 }
4385 }
4386 break;
4387
4388 case REG_WAS_0:
4389 /* This note applies to the dest of the original insn. Find the
4390 first new insn that now has the same dest, and move the note
4391 there. */
4392
4393 if (! orig_dest)
4394 abort ();
4395
4396 for (insn = first; ; insn = NEXT_INSN (insn))
4397 {
4398 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4399 && (temp = single_set (insn))
4400 && rtx_equal_p (SET_DEST (temp), orig_dest))
4401 {
4402 XEXP (note, 1) = REG_NOTES (insn);
4403 REG_NOTES (insn) = note;
4404 /* The reg is only zero before one insn, the first that
4405 sets it. */
4406 break;
4407 }
4408 /* It must be set somewhere, fail if we couldn't find where it
4409 was set. */
4410 if (insn == last)
4411 abort ();
4412 }
4413 break;
4414
4415 case REG_EQUAL:
4416 case REG_EQUIV:
4417 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
4418 set is meaningless. Just drop the note. */
4419 if (! orig_dest)
4420 break;
4421
4422 case REG_NO_CONFLICT:
4423 /* These notes apply to the dest of the original insn. Find the last
4424 new insn that now has the same dest, and move the note there. */
4425
4426 if (! orig_dest)
4427 abort ();
4428
4429 for (insn = last; ; insn = PREV_INSN (insn))
4430 {
4431 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4432 && (temp = single_set (insn))
4433 && rtx_equal_p (SET_DEST (temp), orig_dest))
4434 {
4435 XEXP (note, 1) = REG_NOTES (insn);
4436 REG_NOTES (insn) = note;
4437 /* Only put this note on one of the new insns. */
4438 break;
4439 }
4440
4441 /* The original dest must still be set someplace. Abort if we
4442 couldn't find it. */
4443 if (insn == first)
4444 abort ();
4445 }
4446 break;
4447
4448 case REG_LIBCALL:
4449 /* Move a REG_LIBCALL note to the first insn created, and update
4450 the corresponding REG_RETVAL note. */
4451 XEXP (note, 1) = REG_NOTES (first);
4452 REG_NOTES (first) = note;
4453
4454 insn = XEXP (note, 0);
4455 note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
4456 if (note)
4457 XEXP (note, 0) = first;
4458 break;
4459
4460 case REG_RETVAL:
4461 /* Move a REG_RETVAL note to the last insn created, and update
4462 the corresponding REG_LIBCALL note. */
4463 XEXP (note, 1) = REG_NOTES (last);
4464 REG_NOTES (last) = note;
4465
4466 insn = XEXP (note, 0);
4467 note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
4468 if (note)
4469 XEXP (note, 0) = last;
4470 break;
4471
4472 case REG_NONNEG:
4473 /* This should be moved to whichever instruction is a JUMP_INSN. */
4474
4475 for (insn = last; ; insn = PREV_INSN (insn))
4476 {
4477 if (GET_CODE (insn) == JUMP_INSN)
4478 {
4479 XEXP (note, 1) = REG_NOTES (insn);
4480 REG_NOTES (insn) = note;
4481 /* Only put this note on one of the new insns. */
4482 break;
4483 }
4484 /* Fail if we couldn't find a JUMP_INSN. */
4485 if (insn == first)
4486 abort ();
4487 }
4488 break;
4489
4490 case REG_INC:
4491 /* This should be moved to whichever instruction now has the
4492 increment operation. */
4493 abort ();
4494
4495 case REG_LABEL:
4496 /* Should be moved to the new insn(s) which use the label. */
4497 for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
4498 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4499 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
4500 REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_LABEL,
4501 XEXP (note, 0), REG_NOTES (insn));
4502 break;
4503
4504 case REG_CC_SETTER:
4505 case REG_CC_USER:
4506 /* These two notes will never appear until after reorg, so we don't
4507 have to handle them here. */
4508 default:
4509 abort ();
4510 }
4511 }
4512
4513 /* Each new insn created, except the last, has a new set. If the destination
4514 is a register, then this reg is now live across several insns, whereas
4515 previously the dest reg was born and died within the same insn. To
4516 reflect this, we now need a REG_DEAD note on the insn where this
4517 dest reg dies.
4518
4519 Similarly, the new insns may have clobbers that need REG_UNUSED notes. */
4520
4521 for (insn = first; insn != last; insn = NEXT_INSN (insn))
4522 {
4523 rtx pat;
4524 int i;
4525
4526 pat = PATTERN (insn);
4527 if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
4528 new_insn_dead_notes (pat, insn, last, orig_insn);
4529 else if (GET_CODE (pat) == PARALLEL)
4530 {
4531 for (i = 0; i < XVECLEN (pat, 0); i++)
4532 if (GET_CODE (XVECEXP (pat, 0, i)) == SET
4533 || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
4534 new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
4535 }
4536 }
4537
4538 /* If any insn, except the last, uses the register set by the last insn,
4539 then we need a new REG_DEAD note on that insn. In this case, there
4540 would not have been a REG_DEAD note for this register in the original
4541 insn because it was used and set within one insn.
4542
4543 There is no new REG_DEAD note needed if the last insn uses the register
4544 that it is setting. */
4545
4546 set = single_set (last);
4547 if (set)
4548 {
4549 rtx dest = SET_DEST (set);
4550
4551 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
4552 || GET_CODE (dest) == STRICT_LOW_PART
4553 || GET_CODE (dest) == SIGN_EXTRACT)
4554 dest = XEXP (dest, 0);
4555
4556 if (GET_CODE (dest) == REG
4557 && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
4558 {
4559 for (insn = PREV_INSN (last); ; insn = PREV_INSN (insn))
4560 {
4561 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4562 && reg_mentioned_p (dest, PATTERN (insn))
4563 && (set = single_set (insn)))
4564 {
4565 rtx insn_dest = SET_DEST (set);
4566
4567 while (GET_CODE (insn_dest) == ZERO_EXTRACT
4568 || GET_CODE (insn_dest) == SUBREG
4569 || GET_CODE (insn_dest) == STRICT_LOW_PART
4570 || GET_CODE (insn_dest) == SIGN_EXTRACT)
4571 insn_dest = XEXP (insn_dest, 0);
4572
4573 if (insn_dest != dest)
4574 {
4575 note = rtx_alloc (EXPR_LIST);
4576 PUT_REG_NOTE_KIND (note, REG_DEAD);
4577 XEXP (note, 0) = dest;
4578 XEXP (note, 1) = REG_NOTES (insn);
4579 REG_NOTES (insn) = note;
4580 /* The reg only dies in one insn, the last one
4581 that uses it. */
4582 break;
4583 }
4584 }
4585 if (insn == first)
4586 break;
4587 }
4588 }
4589 }
4590
4591 /* If the original dest is modifying a multiple register target, and the
4592 original instruction was split such that the original dest is now set
4593 by two or more SUBREG sets, then the split insns no longer kill the
4594 destination of the original insn.
4595
4596 In this case, if there exists an instruction in the same basic block,
4597 before the split insn, which uses the original dest, and this use is
4598 killed by the original insn, then we must remove the REG_DEAD note on
4599 this insn, because it is now superfluous.
4600
4601 This does not apply when a hard register gets split, because the code
4602 knows how to handle overlapping hard registers properly. */
4603 if (orig_dest && GET_CODE (orig_dest) == REG)
4604 {
4605 int found_orig_dest = 0;
4606 int found_split_dest = 0;
4607
4608 for (insn = first; ; insn = NEXT_INSN (insn))
4609 {
4610 set = single_set (insn);
4611 if (set)
4612 {
4613 if (GET_CODE (SET_DEST (set)) == REG
4614 && REGNO (SET_DEST (set)) == REGNO (orig_dest))
4615 {
4616 found_orig_dest = 1;
4617 break;
4618 }
4619 else if (GET_CODE (SET_DEST (set)) == SUBREG
4620 && SUBREG_REG (SET_DEST (set)) == orig_dest)
4621 {
4622 found_split_dest = 1;
4623 break;
4624 }
4625 }
4626
4627 if (insn == last)
4628 break;
4629 }
4630
4631 if (found_split_dest)
4632 {
4633 /* Search backwards from FIRST, looking for the first insn that uses
4634 the original dest. Stop if we pass a CODE_LABEL or a JUMP_INSN.
4635 If we find an insn, and it has a REG_DEAD note, then delete the
4636 note. */
4637
4638 for (insn = first; insn; insn = PREV_INSN (insn))
4639 {
4640 if (GET_CODE (insn) == CODE_LABEL
4641 || GET_CODE (insn) == JUMP_INSN)
4642 break;
4643 else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
4644 && reg_mentioned_p (orig_dest, insn))
4645 {
4646 note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
4647 if (note)
4648 remove_note (insn, note);
4649 }
4650 }
4651 }
4652 else if (! found_orig_dest)
4653 {
4654 /* This should never happen. */
4655 abort ();
4656 }
4657 }
4658
4659 /* Update reg_n_sets. This is necessary to prevent local alloc from
4660 converting REG_EQUAL notes to REG_EQUIV when splitting has modified
4661 a reg from set once to set multiple times. */
4662
4663 {
4664 rtx x = PATTERN (orig_insn);
4665 RTX_CODE code = GET_CODE (x);
4666
4667 if (code == SET || code == CLOBBER)
4668 update_n_sets (x, -1);
4669 else if (code == PARALLEL)
4670 {
4671 int i;
4672 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4673 {
4674 code = GET_CODE (XVECEXP (x, 0, i));
4675 if (code == SET || code == CLOBBER)
4676 update_n_sets (XVECEXP (x, 0, i), -1);
4677 }
4678 }
4679
4680 for (insn = first; ; insn = NEXT_INSN (insn))
4681 {
4682 x = PATTERN (insn);
4683 code = GET_CODE (x);
4684
4685 if (code == SET || code == CLOBBER)
4686 update_n_sets (x, 1);
4687 else if (code == PARALLEL)
4688 {
4689 int i;
4690 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
4691 {
4692 code = GET_CODE (XVECEXP (x, 0, i));
4693 if (code == SET || code == CLOBBER)
4694 update_n_sets (XVECEXP (x, 0, i), 1);
4695 }
4696 }
4697
4698 if (insn == last)
4699 break;
4700 }
4701 }
4702 }
4703
4704 /* The one entry point in this file. DUMP_FILE is the dump file for
4705 this pass. */
4706
4707 void
4708 schedule_insns (dump_file)
4709 FILE *dump_file;
4710 {
4711 int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
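  /* The per-insn arrays allocated below are indexed by INSN_UID, and the
     insn splitting done later in this function (via try_split) creates
     new UIDs; the MAX_INSNS_PER_SPLIT factor presumably allows for the
     worst-case number of new insns per original insn.  */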
4712 int b;
4713 rtx insn;
4714
4715 /* Taking care of this degenerate case makes the rest of
4716 this code simpler. */
4717 if (n_basic_blocks == 0)
4718 return;
4719
4720 /* Create an insn here so that we can hang dependencies off of it later. */
4721 sched_before_next_call
4722 = gen_rtx (INSN, VOIDmode, 0, NULL_RTX, NULL_RTX,
4723 NULL_RTX, 0, NULL_RTX, 0);
4724
4725 /* Initialize the unused_*_lists. We can't use the ones left over from
4726 the previous function, because gcc has freed that memory. We can use
4727 the ones left over from the first sched pass in the second pass, however,
4728 so only clear them on the first sched pass. The first pass is before
4729 reload if flag_schedule_insns is set, otherwise it is afterwards. */
4730
4731 if (reload_completed == 0 || ! flag_schedule_insns)
4732 {
4733 unused_insn_list = 0;
4734 unused_expr_list = 0;
4735 }
4736
4737 /* We create no insns here, only reorder them, so we
4738 remember how far we can cut back the stack on exit. */
4739
4740 /* Allocate data for this pass. See comments, above,
4741 for what these vectors do. */
4742 insn_luid = (int *) alloca (max_uid * sizeof (int));
4743 insn_priority = (int *) alloca (max_uid * sizeof (int));
4744 insn_tick = (int *) alloca (max_uid * sizeof (int));
4745 insn_costs = (short *) alloca (max_uid * sizeof (short));
4746 insn_units = (short *) alloca (max_uid * sizeof (short));
4747 insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
4748 insn_ref_count = (int *) alloca (max_uid * sizeof (int));
4749
4750 if (reload_completed == 0)
4751 {
4752 sched_reg_n_deaths = (short *) alloca (max_regno * sizeof (short));
4753 sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
4754 sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
4755 bb_dead_regs = (regset) alloca (regset_bytes);
4756 bb_live_regs = (regset) alloca (regset_bytes);
4757 bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
4758 bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
4759 bcopy ((char *) reg_n_deaths, (char *) sched_reg_n_deaths,
4760 max_regno * sizeof (short));
4761 init_alias_analysis ();
4762 }
4763 else
4764 {
4765 sched_reg_n_deaths = 0;
4766 sched_reg_n_calls_crossed = 0;
4767 sched_reg_live_length = 0;
4768 bb_dead_regs = 0;
4769 bb_live_regs = 0;
4770 if (! flag_schedule_insns)
4771 init_alias_analysis ();
4772 }
4773
4774 if (write_symbols != NO_DEBUG)
4775 {
4776 rtx line;
4777
4778 line_note = (rtx *) alloca (max_uid * sizeof (rtx));
4779 bzero ((char *) line_note, max_uid * sizeof (rtx));
4780 line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
4781 bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));
4782
4783 /* Determine the line-number at the start of each basic block.
4784 This must be computed and saved now, because after a basic block's
4785 predecessor has been scheduled, it is impossible to accurately
4786 determine the correct line number for the first insn of the block. */
4787
4788 for (b = 0; b < n_basic_blocks; b++)
4789 for (line = basic_block_head[b]; line; line = PREV_INSN (line))
4790 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
4791 {
4792 line_note_head[b] = line;
4793 break;
4794 }
4795 }
4796
4797 bzero ((char *) insn_luid, max_uid * sizeof (int));
4798 bzero ((char *) insn_priority, max_uid * sizeof (int));
4799 bzero ((char *) insn_tick, max_uid * sizeof (int));
4800 bzero ((char *) insn_costs, max_uid * sizeof (short));
4801 bzero ((char *) insn_units, max_uid * sizeof (short));
4802 bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
4803 bzero ((char *) insn_ref_count, max_uid * sizeof (int));
4804
4805 /* Schedule each basic block, block by block. */
4806
4807 /* ??? Add a NOTE after the last insn of the last basic block. It is not
4808 known why this is done. */
4809 /* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a
4810 valid insn. */
4811
4812 insn = basic_block_end[n_basic_blocks-1];
4813 if (NEXT_INSN (insn) == 0
4814 || (GET_CODE (insn) != NOTE
4815 && GET_CODE (insn) != CODE_LABEL
4816 /* Don't emit a NOTE if it would end up between an unconditional
4817 jump and a BARRIER. */
4818 && ! (GET_CODE (insn) == JUMP_INSN
4819 && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
4820 emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
4821
4822 for (b = 0; b < n_basic_blocks; b++)
4823 {
4824 rtx insn, next;
4825
4826 note_list = 0;
4827
4828 for (insn = basic_block_head[b]; ; insn = next)
4829 {
4830 rtx prev;
4831 rtx set;
4832
4833 /* Can't use `next_real_insn' because that
4834 might go across CODE_LABELS and short-out basic blocks. */
4835 next = NEXT_INSN (insn);
4836 if (GET_CODE (insn) != INSN)
4837 {
4838 if (insn == basic_block_end[b])
4839 break;
4840
4841 continue;
4842 }
4843
4844 /* Don't split no-op move insns. These should silently disappear
4845 later in final. Splitting such insns would break the code
4846 that handles REG_NO_CONFLICT blocks. */
4847 set = single_set (insn);
4848 if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
4849 {
4850 if (insn == basic_block_end[b])
4851 break;
4852
4853 /* Nops get in the way while scheduling, so delete them now if
4854 register allocation has already been done. It is too risky
4855 to try to do this before register allocation, and there are
4856 unlikely to be very many nops then anyway. */
4857 if (reload_completed)
4858 {
4859 PUT_CODE (insn, NOTE);
4860 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4861 NOTE_SOURCE_FILE (insn) = 0;
4862 }
4863
4864 continue;
4865 }
4866
4867 /* Split insns here to get max fine-grain parallelism. */
4868 prev = PREV_INSN (insn);
4869 if (reload_completed == 0)
4870 {
4871 rtx last, first = PREV_INSN (insn);
4872 rtx notes = REG_NOTES (insn);
4873
4874 last = try_split (PATTERN (insn), insn, 1);
4875 if (last != insn)
4876 {
4877 /* try_split returns the NOTE that INSN became. */
4878 first = NEXT_INSN (first);
4879 update_flow_info (notes, first, last, insn);
4880
4881 PUT_CODE (insn, NOTE);
4882 NOTE_SOURCE_FILE (insn) = 0;
4883 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4884 if (insn == basic_block_head[b])
4885 basic_block_head[b] = first;
4886 if (insn == basic_block_end[b])
4887 {
4888 basic_block_end[b] = last;
4889 break;
4890 }
4891 }
4892 }
4893
4894 if (insn == basic_block_end[b])
4895 break;
4896 }
4897
4898 schedule_block (b, dump_file);
4899
4900 #ifdef USE_C_ALLOCA
4901 alloca (0);
4902 #endif
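      /* With the C fallback implementation of alloca, the alloca (0) call
	 above reclaims storage obtained by earlier calls; with a builtin
	 or machine-specific alloca it is effectively a no-op.  */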
4903 }
4904
4905 /* Reposition the prologue and epilogue notes in case we moved the
4906 prologue/epilogue insns. */
4907 if (reload_completed)
4908 reposition_prologue_and_epilogue_notes (get_insns ());
4909
4910 if (write_symbols != NO_DEBUG)
4911 {
4912 rtx line = 0;
4913 rtx insn = get_insns ();
4914 int active_insn = 0;
4915 int notes = 0;
4916
4917 /* Walk the insns deleting redundant line-number notes. Many of these
4918 are already present. The remainder tend to occur at basic
4919 block boundaries. */
4920 for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
4921 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
4922 {
4923 /* If there are no active insns following, INSN is redundant. */
4924 if (active_insn == 0)
4925 {
4926 notes++;
4927 NOTE_SOURCE_FILE (insn) = 0;
4928 NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
4929 }
4930 /* If the line number is unchanged, LINE is redundant. */
4931 else if (line
4932 && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
4933 && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
4934 {
4935 notes++;
4936 NOTE_SOURCE_FILE (line) = 0;
4937 NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
4938 line = insn;
4939 }
4940 else
4941 line = insn;
4942 active_insn = 0;
4943 }
4944 else if (! ((GET_CODE (insn) == NOTE
4945 && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
4946 || (GET_CODE (insn) == INSN
4947 && (GET_CODE (PATTERN (insn)) == USE
4948 || GET_CODE (PATTERN (insn)) == CLOBBER))))
4949 active_insn++;
4950
4951 if (dump_file && notes)
4952 fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
4953 }
4954
4955 if (reload_completed == 0)
4956 {
4957 int regno;
4958 for (regno = 0; regno < max_regno; regno++)
4959 if (sched_reg_live_length[regno])
4960 {
4961 if (dump_file)
4962 {
4963 if (reg_live_length[regno] > sched_reg_live_length[regno])
4964 fprintf (dump_file,
4965 ";; register %d life shortened from %d to %d\n",
4966 regno, reg_live_length[regno],
4967 sched_reg_live_length[regno]);
4968 /* Negative values are special; don't overwrite the current
4969 reg_live_length value if it is negative. */
4970 else if (reg_live_length[regno] < sched_reg_live_length[regno]
4971 && reg_live_length[regno] >= 0)
4972 fprintf (dump_file,
4973 ";; register %d life extended from %d to %d\n",
4974 regno, reg_live_length[regno],
4975 sched_reg_live_length[regno]);
4976
4977 if (! reg_n_calls_crossed[regno]
4978 && sched_reg_n_calls_crossed[regno])
4979 fprintf (dump_file,
4980 ";; register %d now crosses calls\n", regno);
4981 else if (reg_n_calls_crossed[regno]
4982 && ! sched_reg_n_calls_crossed[regno]
4983 && reg_basic_block[regno] != REG_BLOCK_GLOBAL)
4984 fprintf (dump_file,
4985 ";; register %d no longer crosses calls\n", regno);
4986
4987 }
4988 /* Negative values are special; don't overwrite the current
4989 reg_live_length value if it is negative. */
4990 if (reg_live_length[regno] >= 0)
4991 reg_live_length[regno] = sched_reg_live_length[regno];
4992
4993 /* We can't change the value of reg_n_calls_crossed to zero for
4994 pseudos which are live in more than one block.
4995
4996 This is because combine might have made an optimization which
4997 invalidated basic_block_live_at_start and reg_n_calls_crossed,
4998 but it does not update them. If we update reg_n_calls_crossed
4999 here, the two variables are now inconsistent, and this might
5000 confuse the caller-save code into saving a register that doesn't
5001 need to be saved. This is only a problem when we zero calls
5002 crossed for a pseudo live in multiple basic blocks.
5003
5004 Alternatively, we could try to correctly update basic block live
5005 at start here in sched, but that seems complicated. */
5006 if (sched_reg_n_calls_crossed[regno]
5007 || reg_basic_block[regno] != REG_BLOCK_GLOBAL)
5008 reg_n_calls_crossed[regno] = sched_reg_n_calls_crossed[regno];
5009 }
5010 }
5011 }
5012 #endif /* INSN_SCHEDULING */