/* Instruction scheduling pass.
   Copyright (C) 1992, 1993 Free Software Foundation, Inc.
   Contributed by Michael Tiemann (tiemann@cygnus.com)
   Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)

This file is part of GNU CC.

GNU CC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2, or (at your option)
any later version.

GNU CC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GNU CC; see the file COPYING.  If not, write to
the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.  */
/* Instruction scheduling pass.

   This pass implements list scheduling within basic blocks.  It is
   run after flow analysis, but before register allocation.  The
   scheduler works as follows:

   We compute insn priorities based on data dependencies.  Flow
   analysis only creates a fraction of the data-dependencies we must
   observe: namely, only those dependencies which the combiner can be
   expected to use.  For this pass, we must therefore create the
   remaining dependencies we need to observe: register dependencies,
   memory dependencies, dependencies to keep function calls in order,
   and the dependence between a conditional branch and the setting of
   condition codes are all dealt with here.

   The scheduler first traverses the data flow graph, starting with
   the last instruction, and proceeding to the first, assigning
   values to insn_priority as it goes.  This sorts the instructions
   topologically by data dependence.

   Once priorities have been established, we order the insns using
   list scheduling.  This works as follows: starting with a list of
   all the ready insns, and sorted according to priority number, we
   schedule the insn from the end of the list by placing its
   predecessors in the list according to their priority order.  We
   consider this insn scheduled by setting the pointer to the "end" of
   the list to point to the previous insn.  When an insn has no
   predecessors, we either queue it until sufficient time has elapsed
   or add it to the ready list.  As the instructions are scheduled or
   when stalls are introduced, the queue advances and dumps insns into
   the ready list.  When all insns down to the lowest priority have
   been scheduled, the critical path of the basic block has been made
   as short as possible.  The remaining insns are then scheduled in
   remaining slots.

   Function unit conflicts are resolved during reverse list scheduling
   by tracking the time when each insn is committed to the schedule
   and from that, the time the function units it uses must be free.
   As insns on the ready list are considered for scheduling, those
   that would result in a blockage of the already committed insns are
   queued until no blockage will result.  Among the remaining insns on
   the ready list to be considered, the first one with the largest
   potential for causing a subsequent blockage is chosen.

   The following list shows the order in which we want to break ties
   among insns in the ready list:

   1.  choose insn with lowest conflict cost, ties broken by
   2.  choose insn with the longest path to end of bb, ties broken by
   3.  choose insn that kills the most registers, ties broken by
   4.  choose insn that conflicts with the most ready insns, or finally
   5.  choose insn with lowest UID.

   Memory references complicate matters.  Only if we can be certain
   that memory references are not part of the data dependency graph
   (via true, anti, or output dependence), can we move operations past
   memory references.  To first approximation, reads can be done
   independently, while writes introduce dependencies.  Better
   approximations will yield fewer dependencies.

   Dependencies set up by memory references are treated in exactly the
   same way as other dependencies, by using LOG_LINKS.

   Having optimized the critical path, we may have also unduly
   extended the lifetimes of some registers.  If an operation requires
   that constants be loaded into registers, it is certainly desirable
   to load those constants as early as necessary, but no earlier.
   I.e., it will not do to load up a bunch of registers at the
   beginning of a basic block only to use them at the end, if they
   could be loaded later, since this may result in excessive register
   lifetimes.

   Note that since branches are never in basic blocks, but only end
   basic blocks, this pass will not do any branch scheduling.  But
   that is ok, since we can use GNU's delayed branch scheduling
   pass to take care of this case.

   Also note that no further optimizations based on algebraic identities
   are performed, so this pass would be a good one to perform instruction
   splitting, such as breaking up a multiply instruction into shifts
   and adds where that is profitable.

   Given the memory aliasing analysis that this pass should perform,
   it should be possible to remove redundant stores to memory, and to
   load values from registers instead of hitting memory.

   This pass must update information that subsequent passes expect to be
   correct.  Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
   reg_n_calls_crossed, and reg_live_length.  Also, basic_block_head,
   basic_block_end.

   The information in the line number notes is carefully retained by this
   pass.  All other NOTE insns are grouped in their same relative order at
   the beginning of basic blocks that have been scheduled.  */
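As a standalone illustration (not GCC code — the toy DAG, costs, and field names below are invented), the backward priority walk and the ready-list tie-breakers described above can be sketched as:

```c
#include <assert.h>

#define N 4

/* Toy dependence DAG: row i lists the insns that must issue after
   insn i, terminated by -1.  Costs and edges are invented. */
static const int cost[N] = { 2, 1, 3, 1 };
static const int succ[N][N] = {
  { 1, 2, -1, -1 },   /* insn 0 feeds insns 1 and 2 */
  { 3, -1, -1, -1 },  /* insn 1 feeds insn 3 */
  { 3, -1, -1, -1 },  /* insn 2 feeds insn 3 */
  { -1, -1, -1, -1 }, /* insn 3 ends the block */
};

/* Walk from the last insn back to the first, as the pass does, giving
   each insn the length of its longest path to the end of the block. */
static void
compute_priorities (int prio[N])
{
  int i, j;
  for (i = N - 1; i >= 0; i--)
    {
      int best = 0;
      for (j = 0; j < N && succ[i][j] != -1; j++)
        if (prio[succ[i][j]] > best)
          best = prio[succ[i][j]];
      prio[i] = cost[i] + best;
    }
}

/* Toy record carrying the five tie-breaking keys, applied in the
   order listed in the comment above. */
struct toy_insn
{
  int conflict_cost;   /* 1. lower wins */
  int path_to_end;     /* 2. longer wins */
  int reg_kills;       /* 3. more wins */
  int ready_conflicts; /* 4. more wins */
  int uid;             /* 5. lower wins */
};

/* Return nonzero if A should be scheduled in preference to B. */
static int
better_for_schedule (const struct toy_insn *a, const struct toy_insn *b)
{
  if (a->conflict_cost != b->conflict_cost)
    return a->conflict_cost < b->conflict_cost;
  if (a->path_to_end != b->path_to_end)
    return a->path_to_end > b->path_to_end;
  if (a->reg_kills != b->reg_kills)
    return a->reg_kills > b->reg_kills;
  if (a->ready_conflicts != b->ready_conflicts)
    return a->ready_conflicts > b->ready_conflicts;
  return a->uid < b->uid;
}
```

With the edges above, insn 0 gets priority 6 (its longest path runs through insn 2), so it sorts ahead of everything it feeds; the comparator then mirrors the five-step tie-break list.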
#include "basic-block.h"
#include "hard-reg-set.h"
#include "insn-config.h"
#include "insn-attr.h"
#ifdef INSN_SCHEDULING

/* Arrays set up by scheduling for the same respective purposes as
   similar-named arrays set up by flow analysis.  We work with these
   arrays during the scheduling pass so we can compare values against
   unscheduled code.

   Values of these arrays are copied at the end of this pass into the
   arrays set up by flow analysis.  */
static short *sched_reg_n_deaths;
static int *sched_reg_n_calls_crossed;
static int *sched_reg_live_length;

/* Element N is the next insn that sets (hard or pseudo) register
   N within the current basic block; or zero, if there is no
   such insn.  Needed for new registers which may be introduced
   by splitting insns.  */
static rtx *reg_last_uses;
static rtx *reg_last_sets;

/* Vector indexed by INSN_UID giving the original ordering of the insns.  */
static int *insn_luid;
#define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving each instruction a priority.  */
static int *insn_priority;
#define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])

static short *insn_costs;
#define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the function units
   used.  */
static short *insn_units;
#define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]

/* Vector indexed by INSN_UID giving an encoding of the blockage range
   function.  The unit and the range are encoded.  */
static unsigned int *insn_blockage;
#define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define ENCODE_BLOCKAGE(U,R) \
  ((((U) << UNIT_BITS) << BLOCKAGE_BITS \
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS \
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B) \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | (B) & BLOCKAGE_MASK)

/* Encodings of the `<name>_unit_blockage_range' function.  */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
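For concreteness, the encoding can be exercised in isolation.  The bit widths below are assumed stand-ins (in GCC, BLOCKAGE_BITS comes from the genattr-generated code and HOST_BITS_PER_INT from the host configuration); the macros simply mirror the ones above.  A range packs the minimum cost in the high half-word and the maximum in the low half-word, and ENCODE_BLOCKAGE then packs unit, minimum, and maximum into adjacent bit fields:

```c
#include <assert.h>

/* Assumed widths for illustration only. */
#define BLOCKAGE_BITS 5
#define UNIT_BITS 5
#define HOST_BITS_PER_INT 32

#define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
#define ENCODE_BLOCKAGE(U,R) \
  ((((U) << UNIT_BITS) << BLOCKAGE_BITS \
    | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS \
   | MAX_BLOCKAGE_COST (R))
#define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
#define BLOCKAGE_RANGE(B) \
  (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
   | ((B) & BLOCKAGE_MASK))
```

Round-tripping works as long as both costs fit in BLOCKAGE_BITS: BLOCKAGE_RANGE recovers the packed (min, max) pair, and UNIT_BLOCKED recovers the shifted unit field.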
#define DONE_PRIORITY -1
#define MAX_PRIORITY 0x7fffffff
#define TAIL_PRIORITY 0x7ffffffe
#define LAUNCH_PRIORITY 0x7f000001
#define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
#define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)

/* Vector indexed by INSN_UID giving number of insns referring to this insn.  */
static int *insn_ref_count;
#define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])

/* Vector indexed by INSN_UID giving line-number note in effect for each
   insn.  For line-number notes, this indicates whether the note may be
   reused.  */
static rtx *line_note;
#define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])

/* Vector indexed by basic block number giving the starting line-number
   for each basic block.  */
static rtx *line_note_head;

/* List of important notes we must keep around.  This is a pointer to the
   last element in the list.  */
static rtx note_list;

/* Regsets telling whether a given register is live or dead before the last
   scheduled insn.  Must scan the instructions once before scheduling to
   determine what registers are live or dead at the end of the block.  */
static regset bb_dead_regs;
static regset bb_live_regs;

/* Regset telling whether a given register is live after the insn currently
   being scheduled.  Before processing an insn, this is equal to bb_live_regs
   above.  This is used so that we can find registers that are newly born/dead
   after processing an insn.  */
static regset old_live_regs;

/* The chain of REG_DEAD notes.  REG_DEAD notes are removed from all insns
   during the initial scan and reused later.  If there are not exactly as
   many REG_DEAD notes in the post scheduled code as there were in the
   prescheduled code then we trigger an abort because this indicates a bug.  */
static rtx dead_notes;
/* An instruction is ready to be scheduled when all insns following it
   have already been scheduled.  It is important to ensure that all
   insns which use its result will not be executed until its result
   has been computed.  An insn is maintained in one of four structures:

   (P) the "Pending" set of insns which cannot be scheduled until
   their dependencies have been satisfied.
   (Q) the "Queued" set of insns that can be scheduled when sufficient
   time has passed.
   (R) the "Ready" list of unscheduled, uncommitted insns.
   (S) the "Scheduled" list of insns.

   Initially, all insns are either "Pending" or "Ready" depending on
   whether their dependencies are satisfied.

   Insns move from the "Ready" list to the "Scheduled" list as they
   are committed to the schedule.  As this occurs, the insns in the
   "Pending" list have their dependencies satisfied and move to either
   the "Ready" list or the "Queued" set depending on whether
   sufficient time has passed to make them ready.  As time passes,
   insns move from the "Queued" set to the "Ready" list.  Insns may
   move from the "Ready" list to the "Queued" set if they are blocked
   due to a function unit conflict.

   The "Pending" list (P) are the insns in the LOG_LINKS of the unscheduled
   insns, i.e., those that are ready, queued, and pending.
   The "Queued" set (Q) is implemented by the variable `insn_queue'.
   The "Ready" list (R) is implemented by the variables `ready' and
   `n_ready'.
   The "Scheduled" list (S) is the new insn chain built by this pass.

   The transition (R->S) is implemented in the scheduling loop in
   `schedule_block' when the best insn to schedule is chosen.
   The transition (R->Q) is implemented in `schedule_select' when an
   insn is found to have a function unit conflict with the already
   committed insns.
   The transitions (P->R and P->Q) are implemented in `schedule_insn' as
   insns move from the ready list to the scheduled list.
   The transition (Q->R) is implemented at the top of the scheduling
   loop in `schedule_block' as time passes or stalls are introduced.  */
/* Implement a circular buffer to delay instructions until sufficient
   time has passed.  INSN_QUEUE_SIZE is a power of two larger than
   MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c.  This is the
   longest time an insn may be queued.  */
static rtx insn_queue[INSN_QUEUE_SIZE];
static int q_ptr = 0;
static int q_size = 0;
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
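The wrap-around arithmetic relies on INSN_QUEUE_SIZE being a power of two, so advancing an index reduces to a mask rather than a modulo.  A minimal standalone sketch (with an assumed size of 4, not the genattr-computed value) shows the masking:

```c
#include <assert.h>

/* Assumed queue size for illustration; must be a power of two so
   that the AND below performs the wrap-around. */
#define INSN_QUEUE_SIZE 4
#define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
#define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
```

An insn costing C cycles is placed at slot NEXT_Q_AFTER (q_ptr, C); each clock tick the scheduler advances q_ptr with NEXT_Q and drains that slot into the ready list.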
/* Vector indexed by INSN_UID giving the minimum clock tick at which
   the insn becomes ready.  This is used to note timing constraints for
   insns in the pending list.  */
static int *insn_tick;
#define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])

/* Data structure for keeping track of register information
   during that register's life.  */

struct sometimes
{
  short offset; short bit;
  short live_length; short calls_crossed;
};
/* Forward declarations.  */
static rtx canon_rtx PROTO((rtx));
static int rtx_equal_for_memref_p PROTO((rtx, rtx));
static rtx find_symbolic_term PROTO((rtx));
static int memrefs_conflict_p PROTO((int, rtx, int, rtx,
                                     HOST_WIDE_INT));
static void add_dependence PROTO((rtx, rtx, enum reg_note));
static void remove_dependence PROTO((rtx, rtx));
static rtx find_insn_list PROTO((rtx, rtx));
static int insn_unit PROTO((rtx));
static unsigned int blockage_range PROTO((int, rtx));
static void clear_units PROTO((void));
static void prepare_unit PROTO((int));
static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
static void schedule_unit PROTO((int, rtx, int));
static int actual_hazard PROTO((int, rtx, int, int));
static int potential_hazard PROTO((int, rtx, int));
static int insn_cost PROTO((rtx, rtx, rtx));
static int priority PROTO((rtx));
static void free_pending_lists PROTO((void));
static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
static void flush_pending_lists PROTO((rtx));
static void sched_analyze_1 PROTO((rtx, rtx));
static void sched_analyze_2 PROTO((rtx, rtx));
static void sched_analyze_insn PROTO((rtx, rtx));
static int sched_analyze PROTO((rtx, rtx));
static void sched_note_set PROTO((int, rtx, int));
static int rank_for_schedule PROTO((rtx *, rtx *));
static void swap_sort PROTO((rtx *, int));
static void queue_insn PROTO((rtx, int));
static int birthing_insn PROTO((rtx));
static void adjust_priority PROTO((rtx));
static int schedule_insn PROTO((rtx, rtx *, int, int));
static int schedule_select PROTO((rtx *, int, int, FILE *));
static void create_reg_dead_note PROTO((rtx, rtx));
static void attach_deaths PROTO((rtx, rtx, int));
static void attach_deaths_insn PROTO((rtx));
static rtx unlink_notes PROTO((rtx, rtx));
static int new_sometimes_live PROTO((struct sometimes *, int, int,
                                     int));
static void finish_sometimes_live PROTO((struct sometimes *, int));
static void schedule_block PROTO((int, FILE *));
static rtx regno_use_in PROTO((int, rtx));
static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
static void update_n_sets PROTO((rtx, int));
static void update_flow_info PROTO((rtx, rtx, rtx, rtx));

/* Main entry point of this file.  */
void schedule_insns PROTO((FILE *));
#endif /* INSN_SCHEDULING */

#define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))

/* Vector indexed by N giving the initial (unchanging) value known
   for pseudo-register N.  */
static rtx *reg_known_value;

/* Vector recording for each reg_known_value whether it is due to a
   REG_EQUIV note.  Future passes (viz., reload) may replace the
   pseudo with the equivalent expression and so we account for the
   dependences that would be introduced if that happens.  */
/* ??? This is a problem only on the Convex.  The REG_EQUIV notes created in
   assign_parms mention the arg pointer, and there are explicit insns in the
   RTL that modify the arg pointer.  Thus we must ensure that such insns don't
   get scheduled across each other because that would invalidate the REG_EQUIV
   notes.  One could argue that the REG_EQUIV notes are wrong, but solving
   the problem in the scheduler will likely give better code, so we do it
   here.  */
static char *reg_known_equiv_p;

/* Indicates number of valid entries in reg_known_value.  */
static int reg_known_value_size;
static rtx
canon_rtx (x)
     rtx x;
{
  if (GET_CODE (x) == REG && REGNO (x) >= FIRST_PSEUDO_REGISTER
      && REGNO (x) <= reg_known_value_size)
    return reg_known_value[REGNO (x)];
  else if (GET_CODE (x) == PLUS)
    {
      rtx x0 = canon_rtx (XEXP (x, 0));
      rtx x1 = canon_rtx (XEXP (x, 1));

      if (x0 != XEXP (x, 0) || x1 != XEXP (x, 1))
        {
          /* We can tolerate LO_SUMs being offset here; these
             rtl are used for nothing other than comparisons.  */
          if (GET_CODE (x0) == CONST_INT)
            return plus_constant_for_output (x1, INTVAL (x0));
          else if (GET_CODE (x1) == CONST_INT)
            return plus_constant_for_output (x0, INTVAL (x1));
          return gen_rtx (PLUS, GET_MODE (x), x0, x1);
        }
    }
  return x;
}
/* Set up all info needed to perform alias analysis on memory references.  */

void
init_alias_analysis ()
{
  int maxreg = max_reg_num ();
  rtx insn;
  rtx note;
  rtx set;

  reg_known_value_size = maxreg;

  reg_known_value
    = (rtx *) oballoc ((maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx))
      - FIRST_PSEUDO_REGISTER;
  bzero (reg_known_value+FIRST_PSEUDO_REGISTER,
         (maxreg-FIRST_PSEUDO_REGISTER) * sizeof (rtx));

  reg_known_equiv_p
    = (char *) oballoc ((maxreg-FIRST_PSEUDO_REGISTER) * sizeof (char))
      - FIRST_PSEUDO_REGISTER;
  bzero (reg_known_equiv_p+FIRST_PSEUDO_REGISTER,
         (maxreg-FIRST_PSEUDO_REGISTER) * sizeof (char));

  /* Fill in the entries with known constant values.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if ((set = single_set (insn)) != 0
        && GET_CODE (SET_DEST (set)) == REG
        && REGNO (SET_DEST (set)) >= FIRST_PSEUDO_REGISTER
        && (((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
             && reg_n_sets[REGNO (SET_DEST (set))] == 1)
            || (note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) != 0)
        && GET_CODE (XEXP (note, 0)) != EXPR_LIST)
      {
        int regno = REGNO (SET_DEST (set));
        reg_known_value[regno] = XEXP (note, 0);
        reg_known_equiv_p[regno] = REG_NOTE_KIND (note) == REG_EQUIV;
      }

  /* Fill in the remaining entries.  */
  while (--maxreg >= FIRST_PSEUDO_REGISTER)
    if (reg_known_value[maxreg] == 0)
      reg_known_value[maxreg] = regno_reg_rtx[maxreg];
}
/* Return 1 if X and Y are identical-looking rtx's.

   We use the data in reg_known_value above to see if two registers with
   different numbers are, in fact, equivalent.  */

static int
rtx_equal_for_memref_p (x, y)
     rtx x, y;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0 && y == 0)
    return 1;
  if (x == 0 || y == 0)
    return 0;
  x = canon_rtx (x);
  y = canon_rtx (y);

  if (x == y)
    return 1;

  code = GET_CODE (x);
  /* Rtx's of different codes cannot be equal.  */
  if (code != GET_CODE (y))
    return 0;

  /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
     (REG:SI x) and (REG:HI x) are NOT equivalent.  */

  if (GET_MODE (x) != GET_MODE (y))
    return 0;

  /* REG, LABEL_REF, and SYMBOL_REF can be compared nonrecursively.  */

  if (code == REG)
    return REGNO (x) == REGNO (y);
  if (code == LABEL_REF)
    return XEXP (x, 0) == XEXP (y, 0);
  if (code == SYMBOL_REF)
    return XSTR (x, 0) == XSTR (y, 0);

  /* Compare the elements.  If any pair of corresponding elements
     fail to match, return 0 for the whole thing.  */

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      switch (fmt[i])
        {
        case 'w':
          if (XWINT (x, i) != XWINT (y, i))
            return 0;
          break;

        case 'n':
        case 'i':
          if (XINT (x, i) != XINT (y, i))
            return 0;
          break;

        case 'V':
        case 'E':
          /* Two vectors must have the same length.  */
          if (XVECLEN (x, i) != XVECLEN (y, i))
            return 0;

          /* And the corresponding elements must match.  */
          for (j = 0; j < XVECLEN (x, i); j++)
            if (rtx_equal_for_memref_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
              return 0;
          break;

        case 'e':
          if (rtx_equal_for_memref_p (XEXP (x, i), XEXP (y, i)) == 0)
            return 0;
          break;

        case 'S':
        case 's':
          if (strcmp (XSTR (x, i), XSTR (y, i)))
            return 0;
          break;

        case 'u':
          /* These are just backpointers, so they don't matter.  */
          break;

        case '0':
          break;

          /* It is believed that rtx's at this level will never
             contain anything but integers and other rtx's,
             except for within LABEL_REFs and SYMBOL_REFs.  */
        default:
          abort ();
        }
    }
  return 1;
}
/* Given an rtx X, find a SYMBOL_REF or LABEL_REF within
   X and return it, or return 0 if none found.  */

static rtx
find_symbolic_term (x)
     rtx x;
{
  register int i;
  register enum rtx_code code;
  register char *fmt;

  code = GET_CODE (x);
  if (code == SYMBOL_REF || code == LABEL_REF)
    return x;
  if (GET_RTX_CLASS (code) == 'o')
    return 0;

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      rtx t;

      if (fmt[i] == 'e')
        {
          t = find_symbolic_term (XEXP (x, i));
          if (t != 0)
            return t;
        }
      else if (fmt[i] == 'E')
        break;
    }
  return 0;
}
/* Return nonzero if X and Y (memory addresses) could reference the
   same location in memory.  C is an offset accumulator.  When
   C is nonzero, we are testing aliases between X and Y + C.
   XSIZE is the size in bytes of the X reference,
   similarly YSIZE is the size in bytes for Y.

   If XSIZE or YSIZE is zero, we do not know the amount of memory being
   referenced (the reference was BLKmode), so make the most pessimistic
   assumptions.

   We recognize the following cases of non-conflicting memory:

   (1) addresses involving the frame pointer cannot conflict
   with addresses involving static variables.
   (2) static variables with different addresses cannot conflict.

   Nice to notice that varying addresses cannot conflict with fp if no
   local variables had their addresses taken, but that's too hard now.  */

/* ??? In Fortran, references to an array parameter can never conflict with
   another array parameter.  */
static int
memrefs_conflict_p (xsize, x, ysize, y, c)
     register rtx x, y;
     int xsize, ysize;
     HOST_WIDE_INT c;
{
  if (GET_CODE (x) == HIGH)
    x = XEXP (x, 0);
  else if (GET_CODE (x) == LO_SUM)
    x = XEXP (x, 1);
  else
    x = canon_rtx (x);
  if (GET_CODE (y) == HIGH)
    y = XEXP (y, 0);
  else if (GET_CODE (y) == LO_SUM)
    y = XEXP (y, 1);
  else
    y = canon_rtx (y);

  if (rtx_equal_for_memref_p (x, y))
    return (xsize == 0 || ysize == 0 ||
            (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

  if (y == frame_pointer_rtx || y == hard_frame_pointer_rtx
      || y == stack_pointer_rtx)
    {
      rtx t = y;
      int tsize = ysize;
      y = x; ysize = xsize;
      x = t; xsize = tsize;
    }

  if (x == frame_pointer_rtx || x == hard_frame_pointer_rtx
      || x == stack_pointer_rtx)
    {
      rtx y1;

      if (CONSTANT_P (y))
        return 0;

      if (GET_CODE (y) == PLUS
          && canon_rtx (XEXP (y, 0)) == x
          && (y1 = canon_rtx (XEXP (y, 1)))
          && GET_CODE (y1) == CONST_INT)
        {
          c += INTVAL (y1);
          return (xsize == 0 || ysize == 0
                  || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
        }

      if (GET_CODE (y) == PLUS
          && (y1 = canon_rtx (XEXP (y, 0)))
          && CONSTANT_P (y1))
        return 0;

      return 1;
    }

  if (GET_CODE (x) == PLUS)
    {
      /* The fact that X is canonicalized means that this
         PLUS rtx is canonicalized.  */
      rtx x0 = XEXP (x, 0);
      rtx x1 = XEXP (x, 1);

      if (GET_CODE (y) == PLUS)
        {
          /* The fact that Y is canonicalized means that this
             PLUS rtx is canonicalized.  */
          rtx y0 = XEXP (y, 0);
          rtx y1 = XEXP (y, 1);

          if (rtx_equal_for_memref_p (x1, y1))
            return memrefs_conflict_p (xsize, x0, ysize, y0, c);
          if (rtx_equal_for_memref_p (x0, y0))
            return memrefs_conflict_p (xsize, x1, ysize, y1, c);
          if (GET_CODE (x1) == CONST_INT)
            if (GET_CODE (y1) == CONST_INT)
              return memrefs_conflict_p (xsize, x0, ysize, y0,
                                         c - INTVAL (x1) + INTVAL (y1));
            else
              return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
          else if (GET_CODE (y1) == CONST_INT)
            return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));

          /* Handle case where we cannot understand iteration operators,
             but we notice that the base addresses are distinct objects.  */
          x = find_symbolic_term (x);
          if (x == 0)
            return 1;
          y = find_symbolic_term (y);
          if (y == 0)
            return 1;
          return rtx_equal_for_memref_p (x, y);
        }
      else if (GET_CODE (x1) == CONST_INT)
        return memrefs_conflict_p (xsize, x0, ysize, y, c - INTVAL (x1));
    }
  else if (GET_CODE (y) == PLUS)
    {
      /* The fact that Y is canonicalized means that this
         PLUS rtx is canonicalized.  */
      rtx y0 = XEXP (y, 0);
      rtx y1 = XEXP (y, 1);

      if (GET_CODE (y1) == CONST_INT)
        return memrefs_conflict_p (xsize, x, ysize, y0, c + INTVAL (y1));
      else
        return 1;
    }

  if (GET_CODE (x) == GET_CODE (y))
    switch (GET_CODE (x))
      {
      case MULT:
        {
          /* Handle cases where we expect the second operands to be the
             same, and check only whether the first operand would conflict
             or not.  */
          rtx x0, y0;
          rtx x1 = canon_rtx (XEXP (x, 1));
          rtx y1 = canon_rtx (XEXP (y, 1));
          if (! rtx_equal_for_memref_p (x1, y1))
            return 1;
          x0 = canon_rtx (XEXP (x, 0));
          y0 = canon_rtx (XEXP (y, 0));
          if (rtx_equal_for_memref_p (x0, y0))
            return (xsize == 0 || ysize == 0
                    || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));

          /* Can't properly adjust our sizes.  */
          if (GET_CODE (x1) != CONST_INT)
            return 1;
          xsize /= INTVAL (x1);
          ysize /= INTVAL (x1);
          c /= INTVAL (x1);
          return memrefs_conflict_p (xsize, x0, ysize, y0, c);
        }
      }

  if (GET_CODE (x) == CONST_INT && GET_CODE (y) == CONST_INT)
    {
      c += (INTVAL (y) - INTVAL (x));
      return (xsize == 0 || ysize == 0
              || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0));
    }

  if (GET_CODE (x) == CONST)
    {
      if (GET_CODE (y) == CONST)
        return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
                                   ysize, canon_rtx (XEXP (y, 0)), c);
      else
        return memrefs_conflict_p (xsize, canon_rtx (XEXP (x, 0)),
                                   ysize, y, c);
    }
  if (GET_CODE (y) == CONST)
    return memrefs_conflict_p (xsize, x, ysize,
                               canon_rtx (XEXP (y, 0)), c);

  if (CONSTANT_P (x) && CONSTANT_P (y))
    return (rtx_equal_for_memref_p (x, y)
            && (xsize == 0 || ysize == 0
                || (c >= 0 && xsize > c) || (c < 0 && ysize+c > 0)));

  return 1;
}
/* Functions to compute memory dependencies.

   Since we process the insns in execution order, we can build tables
   to keep track of what registers are fixed (and not aliased), what registers
   are varying in known ways, and what registers are varying in unknown
   ways.

   If both memory references are volatile, then there must always be a
   dependence between the two references, since their order can not be
   changed.  A volatile and non-volatile reference can be interchanged
   though.

   A MEM_IN_STRUCT reference at a non-QImode varying address can never
   conflict with a non-MEM_IN_STRUCT reference at a fixed address.  We must
   allow QImode aliasing because the ANSI C standard allows character
   pointers to alias anything.  We are assuming that characters are
   always QImode here.  */
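A standalone boolean sketch of the rule above may help, since the double-sided negation in the dependence functions below is easy to misread.  The struct and field names here are invented stand-ins for GCC's MEM_VOLATILE_P, MEM_IN_STRUCT_P, rtx_addr_varies_p, and QImode tests, not the real rtx representation:

```c
#include <assert.h>

/* Toy memory-reference summary (invented; not GCC's rtx). */
struct toy_mem
{
  int is_volatile;
  int in_struct;
  int addr_varies;
  int is_qimode;
};

/* Sketch of the shared tail of the dependence tests: two volatile
   refs always conflict; otherwise the addresses must be able to
   overlap, and a non-QImode varying struct reference is deemed
   disjoint from a fixed non-struct reference, in either direction. */
static int
toy_conflict (const struct toy_mem *mem, const struct toy_mem *x,
              int addrs_may_overlap)
{
  if (mem->is_volatile && x->is_volatile)
    return 1;
  return (addrs_may_overlap
          && ! (mem->in_struct && mem->addr_varies && ! mem->is_qimode
                && ! x->in_struct && ! x->addr_varies)
          && ! (x->in_struct && x->addr_varies && ! x->is_qimode
                && ! mem->in_struct && ! mem->addr_varies));
}
```

Note how a QImode (character-sized) struct reference escapes the filter, matching the ANSI C rule that character pointers may alias anything.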
/* Read dependence: X is read after read in MEM takes place.  There can
   only be a dependence here if both reads are volatile.  */

int
read_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem);
}
/* True dependence: X is read after store in MEM takes place.  */

int
true_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If X is an unchanging read, then it can't possibly conflict with any
     non-unchanging store.  It may conflict with an unchanging write though,
     because there may be a single store to this address to initialize it.
     Just fall through to the code below to resolve the case where we have
     both an unchanging read and an unchanging write.  This won't handle all
     cases optimally, but the possible performance loss should be
     negligible.  */
  if (RTX_UNCHANGING_P (x) && ! RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
          || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
                                  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
              && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
                    && GET_MODE (mem) != QImode
                    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
              && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
                    && GET_MODE (x) != QImode
                    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Anti dependence: X is written after read in MEM takes place.  */

int
anti_dependence (mem, x)
     rtx mem;
     rtx x;
{
  /* If MEM is an unchanging read, then it can't possibly conflict with
     the store to X, because there is at most one store to MEM, and it must
     have occurred somewhere before MEM.  */
  if (RTX_UNCHANGING_P (mem))
    return 0;

  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
          || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
                                  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
              && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
                    && GET_MODE (mem) != QImode
                    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
              && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
                    && GET_MODE (x) != QImode
                    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Output dependence: X is written after store in MEM takes place.  */

int
output_dependence (mem, x)
     rtx mem;
     rtx x;
{
  return ((MEM_VOLATILE_P (x) && MEM_VOLATILE_P (mem))
          || (memrefs_conflict_p (SIZE_FOR_MODE (mem), XEXP (mem, 0),
                                  SIZE_FOR_MODE (x), XEXP (x, 0), 0)
              && ! (MEM_IN_STRUCT_P (mem) && rtx_addr_varies_p (mem)
                    && GET_MODE (mem) != QImode
                    && ! MEM_IN_STRUCT_P (x) && ! rtx_addr_varies_p (x))
              && ! (MEM_IN_STRUCT_P (x) && rtx_addr_varies_p (x)
                    && GET_MODE (x) != QImode
                    && ! MEM_IN_STRUCT_P (mem) && ! rtx_addr_varies_p (mem))));
}
/* Helper functions for instruction scheduling.  */

/* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
   LOG_LINKS of INSN, if not already there.  DEP_TYPE indicates the type
   of dependence that this link represents.  */

static void
add_dependence (insn, elem, dep_type)
     rtx insn;
     rtx elem;
     enum reg_note dep_type;
{
  rtx link, next;

  /* Don't depend an insn on itself.  */
  if (insn == elem)
    return;

  /* If elem is part of a sequence that must be scheduled together, then
     make the dependence point to the last insn of the sequence.
     When HAVE_cc0, it is possible for NOTEs to exist between users and
     setters of the condition codes, so we must skip past notes here.
     Otherwise, NOTEs are impossible here.  */

  next = NEXT_INSN (elem);

#ifdef HAVE_cc0
  while (next && GET_CODE (next) == NOTE)
    next = NEXT_INSN (next);
#endif

  if (next && SCHED_GROUP_P (next))
    {
      /* Notes will never intervene here though, so don't bother checking
         for them.  */
      /* We must reject CODE_LABELs, so that we don't get confused by one
         that has LABEL_PRESERVE_P set, which is represented by the same
         bit in the rtl as SCHED_GROUP_P.  A CODE_LABEL can never be
         SCHED_GROUP_P.  */
      while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
             && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
        next = NEXT_INSN (next);

      /* Again, don't depend an insn on itself.  */
      if (insn == next)
        return;

      /* Make the dependence to NEXT, the last insn of the group, instead
         of the original ELEM.  */
      elem = next;
    }

  /* Check that we don't already have this dependence.  */
  for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
    if (XEXP (link, 0) == elem)
      {
        /* If this is a more restrictive type of dependence than the existing
           one, then change the existing dependence to this type.  */
        if ((int) dep_type < (int) REG_NOTE_KIND (link))
          PUT_REG_NOTE_KIND (link, dep_type);
        return;
      }
  /* Might want to check one level of transitivity to save conses.  */

  link = rtx_alloc (INSN_LIST);
  /* Insn dependency, not data dependency.  */
  PUT_REG_NOTE_KIND (link, dep_type);
  XEXP (link, 0) = elem;
  XEXP (link, 1) = LOG_LINKS (insn);
  LOG_LINKS (insn) = link;
}
/* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
   of INSN.  Abort if not found.  */

static void
remove_dependence (insn, elem)
     rtx insn;
     rtx elem;
{
  rtx prev, link;
  int found = 0;

  for (prev = 0, link = LOG_LINKS (insn); link;
       prev = link, link = XEXP (link, 1))
    {
      if (XEXP (link, 0) == elem)
        {
          if (prev)
            XEXP (prev, 1) = XEXP (link, 1);
          else
            LOG_LINKS (insn) = XEXP (link, 1);
          found = 1;
        }
    }

  if (! found)
    abort ();
  return;
}
#ifndef INSN_SCHEDULING
void
schedule_insns (dump_file)
     FILE *dump_file;
{
}
#else
/* Computation of memory dependencies.  */

/* The *_insns and *_mems are paired lists.  Each pending memory operation
   will have a pointer to the MEM rtx on one list and a pointer to the
   containing insn on the other list in the same place in the list.  */

/* We can't use add_dependence like the old code did, because a single insn
   may have multiple memory accesses, and hence needs to be on the list
   once for each memory access.  Add_dependence won't let you add an insn
   to a list more than once.  */

/* An INSN_LIST containing all insns with pending read operations.  */
static rtx pending_read_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending reads.  */
static rtx pending_read_mems;

/* An INSN_LIST containing all insns with pending write operations.  */
static rtx pending_write_insns;

/* An EXPR_LIST containing all MEM rtx's which are pending writes.  */
static rtx pending_write_mems;

/* Indicates the combined length of the two pending lists.  We must prevent
   these lists from ever growing too large since the number of dependencies
   produced is at least O(N*N), and execution time is at least O(4*N*N), as
   a function of the length of these pending lists.  */

static int pending_lists_length;

/* An INSN_LIST containing all INSN_LISTs allocated but currently unused.  */

static rtx unused_insn_list;

/* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused.  */

static rtx unused_expr_list;

/* The last insn upon which all memory references must depend.
   This is an insn which flushed the pending lists, creating a dependency
   between it and all previously pending memory references.  This creates
   a barrier (or a checkpoint) which no memory reference is allowed to cross.

   This includes all non constant CALL_INSNs.  When we do interprocedural
   alias analysis, this restriction can be relaxed.
   This may also be an INSN that writes memory if the pending lists grow
   too large.  */

static rtx last_pending_memory_flush;

/* The last function call we have seen.  All hard regs, and, of course,
   the last function call, must depend on this.  */

static rtx last_function_call;

/* The LOG_LINKS field of this is a list of insns which use a pseudo register
   that does not already cross a call.  We create dependencies between each
   of those insn and the next call insn, to ensure that they won't cross a call
   after scheduling is done.  */

static rtx sched_before_next_call;

/* Pointer to the last instruction scheduled.  Used by rank_for_schedule,
   so that insns independent of the last scheduled insn will be preferred
   over dependent instructions.  */

static rtx last_scheduled_insn;
/* Process an insn's memory dependencies.  There are four kinds of
   dependencies:

   (0) read dependence: read follows read
   (1) true dependence: read follows write
   (2) anti dependence: write follows read
   (3) output dependence: write follows write

   We are careful to build only dependencies which actually exist, and
   use transitivity to avoid building too many links.  */
/* Return the INSN_LIST containing INSN in LIST, or NULL
   if LIST does not contain INSN.  */

static rtx
find_insn_list (insn, list)
     rtx insn;
     rtx list;
{
  while (list)
    {
      if (XEXP (list, 0) == insn)
        return list;
      list = XEXP (list, 1);
    }
  return 0;
}
/* Compute the function units used by INSN.  This caches the value
   returned by function_units_used.  A function unit is encoded as the
   unit number if the value is non-negative and the complement of a
   mask if the value is negative.  A function unit index is the
   non-negative encoding.  */

__inline static int
insn_unit (insn)
     rtx insn;
{
  register int unit = INSN_UNIT (insn);

  if (unit == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
         We can't pass these directly to function_units_used because it will
         trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
        unit = -1;
      else
        {
          unit = function_units_used (insn);
          /* Increment non-negative values so we can cache zero.  */
          if (unit >= 0) unit++;
        }

      /* We only cache 16 bits of the result, so if the value is out of
         range, don't cache it.  */
      if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
          || unit >= 0
          || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
        INSN_UNIT (insn) = unit;
    }
  return (unit > 0 ? unit - 1 : unit);
}
/* Compute the blockage range for executing INSN on UNIT.  This caches
   the value returned by the blockage_range_function for the unit.
   These values are encoded in an int where the upper half gives the
   minimum value and the lower half gives the maximum value.  */

__inline static unsigned int
blockage_range (unit, insn)
     int unit;
     rtx insn;
{
  unsigned int blockage = INSN_BLOCKAGE (insn);
  unsigned int range;

  if (UNIT_BLOCKED (blockage) != unit + 1)
    {
      range = function_units[unit].blockage_range_function (insn);
      /* We only cache the blockage range for one unit and then only if
         the values fit.  */
      if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
        INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
    }
  else
    range = BLOCKAGE_RANGE (blockage);

  return range;
}
/* A vector indexed by function unit instance giving the last insn to use
   the unit.  The value of the function unit instance index for unit U
   instance I is (U + I * FUNCTION_UNITS_SIZE).  */
static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit instance giving the minimum time when
   the unit will unblock based on the maximum blockage cost.  */
static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];

/* A vector indexed by function unit number giving the number of insns
   that remain to use the unit.  */
static int unit_n_insns[FUNCTION_UNITS_SIZE];
/* Reset the function unit state to the null state.  */

static void
clear_units ()
{
  bzero (unit_last_insn, sizeof (unit_last_insn));
  bzero (unit_tick, sizeof (unit_tick));
  bzero (unit_n_insns, sizeof (unit_n_insns));
}
/* Record an insn as one that will use the units encoded by UNIT.  */

__inline static void
prepare_unit (unit)
     int unit;
{
  int i;

  if (unit >= 0)
    unit_n_insns[unit]++;
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        prepare_unit (i);
}
/* Return the actual hazard cost of executing INSN on the unit UNIT,
   instance INSTANCE at time CLOCK if the previous actual hazard cost
   was COST.  */

__inline static int
actual_hazard_this_instance (unit, instance, insn, clock, cost)
     int unit, instance, clock, cost;
     rtx insn;
{
  int tick = unit_tick[instance];

  if (tick - clock > cost)
    {
      /* The scheduler is operating in reverse, so INSN is the executing
         insn and the unit's last insn is the candidate insn.  We want a
         more exact measure of the blockage if we execute INSN at CLOCK
         given when we committed the execution of the unit's last insn.

         The blockage value is given by either the unit's max blockage
         constant, blockage range function, or blockage function.  Use
         the most exact form for the given unit.  */

      if (function_units[unit].blockage_range_function)
        {
          if (function_units[unit].blockage_function)
            tick += (function_units[unit].blockage_function
                     (insn, unit_last_insn[instance])
                     - function_units[unit].max_blockage);
          else
            tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
                     - function_units[unit].max_blockage);
        }
      if (tick - clock > cost)
        cost = tick - clock;
    }
  return cost;
}
/* Record INSN as having begun execution on the units encoded by UNIT at
   time CLOCK.  */

__inline static void
schedule_unit (unit, insn, clock)
     int unit, clock;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      int instance = unit;
#if MAX_MULTIPLICITY > 1
      /* Find the first free instance of the function unit and use that
         one.  We assume that one is free.  */
      for (i = function_units[unit].multiplicity - 1; i > 0; i--)
        {
          if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
            break;
          instance += FUNCTION_UNITS_SIZE;
        }
#endif
      unit_last_insn[instance] = insn;
      unit_tick[instance] = (clock + function_units[unit].max_blockage);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        schedule_unit (i, insn, clock);
}
/* Return the actual hazard cost of executing INSN on the units encoded by
   UNIT at time CLOCK if the previous actual hazard cost was COST.  */

static int
actual_hazard (unit, insn, clock, cost)
     int unit, clock, cost;
     rtx insn;
{
  int i;

  if (unit >= 0)
    {
      /* Find the instance of the function unit with the minimum hazard.  */
      int instance = unit;
      int best = instance;
      int best_cost = actual_hazard_this_instance (unit, instance, insn,
                                                   clock, cost);
      int this_cost;

#if MAX_MULTIPLICITY > 1
      if (best_cost > cost)
        {
          for (i = function_units[unit].multiplicity - 1; i > 0; i--)
            {
              instance += FUNCTION_UNITS_SIZE;
              this_cost = actual_hazard_this_instance (unit, instance, insn,
                                                       clock, cost);
              if (this_cost < best_cost)
                {
                  best = instance;
                  best_cost = this_cost;
                  if (this_cost <= cost)
                    break;
                }
            }
        }
#endif
      cost = MAX (cost, best_cost);
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        cost = actual_hazard (i, insn, clock, cost);

  return cost;
}
/* Return the potential hazard cost of executing an instruction on the
   units encoded by UNIT if the previous potential hazard cost was COST.
   An insn with a large blockage time is chosen in preference to one
   with a smaller time; an insn that uses a unit that is more likely
   to be used is chosen in preference to one with a unit that is less
   used.  We are trying to minimize a subsequent actual hazard.  */

__inline static int
potential_hazard (unit, insn, cost)
     int unit, cost;
     rtx insn;
{
  int i, ncost;
  unsigned int minb, maxb;

  if (unit >= 0)
    {
      minb = maxb = function_units[unit].max_blockage;
      if (maxb > 1)
        {
          if (function_units[unit].blockage_range_function)
            {
              maxb = minb = blockage_range (unit, insn);
              maxb = MAX_BLOCKAGE_COST (maxb);
              minb = MIN_BLOCKAGE_COST (minb);
            }

          if (maxb > 1)
            {
              /* Make the number of instructions left dominate.  Make the
                 minimum delay dominate the maximum delay.  If all these
                 are the same, use the unit number to add an arbitrary
                 ordering.  Other terms can be added.  */
              ncost = minb * 0x40 + maxb;
              ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
              if (ncost > cost)
                cost = ncost;
            }
        }
    }
  else
    for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
      if ((unit & 1) != 0)
        cost = potential_hazard (i, insn, cost);

  return cost;
}
/* Compute cost of executing INSN given the dependence LINK on the insn USED.
   This is the number of virtual cycles taken between instruction issue and
   instruction results.  */

__inline static int
insn_cost (insn, link, used)
     rtx insn, link, used;
{
  register int cost = INSN_COST (insn);

  if (cost == 0)
    {
      recog_memoized (insn);

      /* A USE insn, or something else we don't need to understand.
         We can't pass these directly to result_ready_cost because it will
         trigger a fatal error for unrecognizable insns.  */
      if (INSN_CODE (insn) < 0)
        {
          INSN_COST (insn) = 1;
          return 1;
        }
      else
        {
          cost = result_ready_cost (insn);

          if (cost < 1)
            cost = 1;

          INSN_COST (insn) = cost;
        }
    }

  /* A USE insn should never require the value used to be computed.  This
     allows the computation of a function's result and parameter values to
     overlap the return and call.  */
  recog_memoized (used);
  if (INSN_CODE (used) < 0)
    LINK_COST_FREE (link) = 1;

  /* If some dependencies vary the cost, compute the adjustment.  Most
     commonly, the adjustment is complete: either the cost is ignored
     (in the case of an output- or anti-dependence), or the cost is
     unchanged.  These values are cached in the link as LINK_COST_FREE
     and LINK_COST_ZERO.  */

  if (LINK_COST_FREE (link))
    cost = 1;
#ifdef ADJUST_COST
  else if (! LINK_COST_ZERO (link))
    {
      int ncost = cost;

      ADJUST_COST (used, link, insn, ncost);
      if (ncost < 1)
        LINK_COST_FREE (link) = ncost = 1;
      if (cost == ncost)
        LINK_COST_ZERO (link) = 1;
      cost = ncost;
    }
#endif

  return cost;
}
/* Compute the priority number for INSN.  */

static int
priority (insn)
     rtx insn;
{
  if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
    {
      int prev_priority;
      int max_priority;
      int this_priority = INSN_PRIORITY (insn);
      rtx prev;

      if (this_priority > 0)
        return this_priority;

      max_priority = 1;

      /* Nonzero if these insns must be scheduled together.  */
      if (SCHED_GROUP_P (insn))
        {
          prev = insn;
          while (SCHED_GROUP_P (prev))
            {
              prev = PREV_INSN (prev);
              INSN_REF_COUNT (prev) += 1;
            }
        }

      for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
        {
          rtx x = XEXP (prev, 0);

          /* A dependence pointing to a note is always obsolete, because
             sched_analyze_insn will have created any necessary new dependences
             which replace it.  Notes can be created when instructions are
             deleted by insn splitting, or by register allocation.  */
          if (GET_CODE (x) == NOTE)
            {
              remove_dependence (insn, x);
              continue;
            }

          /* Clear the link cost adjustment bits.  */
          LINK_COST_FREE (prev) = 0;
#ifdef ADJUST_COST
          LINK_COST_ZERO (prev) = 0;
#endif

          /* This priority calculation was chosen because it results in the
             least instruction movement, and does not hurt the performance
             of the resulting code compared to the old algorithm.
             This makes the sched algorithm more stable, which results
             in better code, because there is less register pressure,
             cross jumping is more likely to work, and debugging is easier.

             When all instructions have a latency of 1, there is no need to
             move any instructions.  Subtracting one here ensures that in such
             cases all instructions will end up with a priority of one, and
             hence no scheduling will be done.

             The original code did not subtract the one, and added the
             insn_cost of the current instruction to its priority (e.g.
             move the insn_cost call down to the end).  */

          if (REG_NOTE_KIND (prev) == 0)
            /* Data dependence.  */
            prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;
          else
            /* Anti or output dependence.  Don't add the latency of this
               insn's result, because it isn't being used.  */
            prev_priority = priority (x);

          if (prev_priority > max_priority)
            max_priority = prev_priority;
          INSN_REF_COUNT (x) += 1;
        }

      prepare_unit (insn_unit (insn));
      INSN_PRIORITY (insn) = max_priority;
      return INSN_PRIORITY (insn);
    }
  return 0;
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
   them to the unused_*_list variables, so that they can be reused.  */

static void
free_pending_lists ()
{
  register rtx link, prev_link;

  if (pending_read_insns)
    {
      prev_link = pending_read_insns;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_read_insns;
      pending_read_insns = 0;
    }

  if (pending_write_insns)
    {
      prev_link = pending_write_insns;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_insn_list;
      unused_insn_list = pending_write_insns;
      pending_write_insns = 0;
    }

  if (pending_read_mems)
    {
      prev_link = pending_read_mems;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_read_mems;
      pending_read_mems = 0;
    }

  if (pending_write_mems)
    {
      prev_link = pending_write_mems;
      link = XEXP (prev_link, 1);

      while (link)
        {
          prev_link = link;
          link = XEXP (link, 1);
        }

      XEXP (prev_link, 1) = unused_expr_list;
      unused_expr_list = pending_write_mems;
      pending_write_mems = 0;
    }
}
/* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
   The MEM is a memory reference contained within INSN, which we are saving
   so that we can do memory aliasing on it.  */

static void
add_insn_mem_dependence (insn_list, mem_list, insn, mem)
     rtx *insn_list, *mem_list, insn, mem;
{
  register rtx link;

  if (unused_insn_list)
    {
      link = unused_insn_list;
      unused_insn_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (INSN_LIST);
  XEXP (link, 0) = insn;
  XEXP (link, 1) = *insn_list;
  *insn_list = link;

  if (unused_expr_list)
    {
      link = unused_expr_list;
      unused_expr_list = XEXP (link, 1);
    }
  else
    link = rtx_alloc (EXPR_LIST);
  XEXP (link, 0) = mem;
  XEXP (link, 1) = *mem_list;
  *mem_list = link;

  pending_lists_length++;
}
/* Make a dependency between every memory reference on the pending lists
   and INSN, thus flushing the pending lists.  */

static void
flush_pending_lists (insn)
     rtx insn;
{
  rtx link;

  while (pending_read_insns)
    {
      add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);

      link = pending_read_insns;
      pending_read_insns = XEXP (pending_read_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_read_mems;
      pending_read_mems = XEXP (pending_read_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  while (pending_write_insns)
    {
      add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);

      link = pending_write_insns;
      pending_write_insns = XEXP (pending_write_insns, 1);
      XEXP (link, 1) = unused_insn_list;
      unused_insn_list = link;

      link = pending_write_mems;
      pending_write_mems = XEXP (pending_write_mems, 1);
      XEXP (link, 1) = unused_expr_list;
      unused_expr_list = link;
    }
  pending_lists_length = 0;

  if (last_pending_memory_flush)
    add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

  last_pending_memory_flush = insn;
}
/* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
   by the write to the destination of X, and reads of everything mentioned.  */

static void
sched_analyze_1 (x, insn)
     rtx x;
     rtx insn;
{
  register int regno;
  register rtx dest = SET_DEST (x);

  if (dest == 0)
    return;

  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
         || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
    {
      if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
        {
          /* The second and third arguments are values read by this insn.  */
          sched_analyze_2 (XEXP (dest, 1), insn);
          sched_analyze_2 (XEXP (dest, 2), insn);
        }
      dest = SUBREG_REG (dest);
    }

  if (GET_CODE (dest) == REG)
    {
      register int offset, bit, i;

      regno = REGNO (dest);

      /* A hard reg in a wide mode may really be multiple registers.
         If so, mark all of them just like the first.  */
      if (regno < FIRST_PSEUDO_REGISTER)
        {
          i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
          while (--i >= 0)
            {
              rtx u;

              for (u = reg_last_uses[regno + i]; u; u = XEXP (u, 1))
                add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
              reg_last_uses[regno + i] = 0;
              if (reg_last_sets[regno + i])
                add_dependence (insn, reg_last_sets[regno + i],
                                REG_DEP_OUTPUT);
              reg_last_sets[regno + i] = insn;
              if ((call_used_regs[i] || global_regs[i])
                  && last_function_call)
                /* Function calls clobber all call_used regs.  */
                add_dependence (insn, last_function_call, REG_DEP_ANTI);
            }
        }
      else
        {
          rtx u;

          for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
            add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
          reg_last_uses[regno] = 0;
          if (reg_last_sets[regno])
            add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
          reg_last_sets[regno] = insn;

          /* Pseudos that are REG_EQUIV to something may be replaced
             by that during reloading.  We need only add dependencies for
             the address in the REG_EQUIV note.  */
          if (! reload_completed
              && reg_known_equiv_p[regno]
              && GET_CODE (reg_known_value[regno]) == MEM)
            sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

          /* Don't let it cross a call after scheduling if it doesn't
             already cross one.  */
          if (reg_n_calls_crossed[regno] == 0 && last_function_call)
            add_dependence (insn, last_function_call, REG_DEP_ANTI);
        }
    }
  else if (GET_CODE (dest) == MEM)
    {
      /* Writing memory.  */

      if (pending_lists_length > 32)
        {
          /* Flush all pending reads and writes to prevent the pending lists
             from getting any larger.  Insn scheduling runs too slowly when
             these lists get long.  The number 32 was chosen because it
             seems like a reasonable number.  When compiling GCC with itself,
             this flush occurs 8 times for sparc, and 10 times for m88k using
             the number 32.  */
          flush_pending_lists (insn);
        }
      else
        {
          rtx pending, pending_mem;

          pending = pending_read_insns;
          pending_mem = pending_read_mems;
          while (pending)
            {
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (anti_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);
            }

          pending = pending_write_insns;
          pending_mem = pending_write_mems;
          while (pending)
            {
              /* If a dependency already exists, don't create a new one.  */
              if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
                if (output_dependence (XEXP (pending_mem, 0), dest))
                  add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);

              pending = XEXP (pending, 1);
              pending_mem = XEXP (pending_mem, 1);
            }

          if (last_pending_memory_flush)
            add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

          add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
                                   insn, dest);
        }
      sched_analyze_2 (XEXP (dest, 0), insn);
    }

  /* Analyze reads.  */
  if (GET_CODE (x) == SET)
    sched_analyze_2 (SET_SRC (x), insn);
}
/* Analyze the uses of memory and registers in rtx X in INSN.  */

static void
sched_analyze_2 (x, insn)
     rtx x;
     rtx insn;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0)
    return;

  code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST_DOUBLE:
    case SYMBOL_REF:
    case CONST:
    case LABEL_REF:
      /* Ignore constants.  Note that we must handle CONST_DOUBLE here
         because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
         this does not mean that this insn is using cc0.  */
      return;

#ifdef HAVE_cc0
    case CC0:
      {
        rtx link, prev;

        /* There may be a note before this insn now, but all notes will
           be removed before we actually try to schedule the insns, so
           it won't cause a problem later.  We must avoid it here though.  */

        /* User of CC0 depends on immediately preceding insn.  */
        SCHED_GROUP_P (insn) = 1;

        /* Make a copy of all dependencies on the immediately previous insn,
           and add to this insn.  This is so that all the dependencies will
           apply to the group.  Remove an explicit dependence on this insn
           as SCHED_GROUP_P now represents it.  */

        prev = PREV_INSN (insn);
        while (GET_CODE (prev) == NOTE)
          prev = PREV_INSN (prev);

        if (find_insn_list (prev, LOG_LINKS (insn)))
          remove_dependence (insn, prev);

        for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
          add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

        return;
      }
#endif

    case REG:
      {
        int regno = REGNO (x);
        if (regno < FIRST_PSEUDO_REGISTER)
          {
            int i;

            i = HARD_REGNO_NREGS (regno, GET_MODE (x));
            while (--i >= 0)
              {
                reg_last_uses[regno + i]
                  = gen_rtx (INSN_LIST, VOIDmode,
                             insn, reg_last_uses[regno + i]);
                if (reg_last_sets[regno + i])
                  add_dependence (insn, reg_last_sets[regno + i], 0);
                if ((call_used_regs[regno + i] || global_regs[regno + i])
                    && last_function_call)
                  /* Function calls clobber all call_used regs.  */
                  add_dependence (insn, last_function_call, REG_DEP_ANTI);
              }
          }
        else
          {
            reg_last_uses[regno]
              = gen_rtx (INSN_LIST, VOIDmode, insn, reg_last_uses[regno]);
            if (reg_last_sets[regno])
              add_dependence (insn, reg_last_sets[regno], 0);

            /* Pseudos that are REG_EQUIV to something may be replaced
               by that during reloading.  We need only add dependencies for
               the address in the REG_EQUIV note.  */
            if (! reload_completed
                && reg_known_equiv_p[regno]
                && GET_CODE (reg_known_value[regno]) == MEM)
              sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);

            /* If the register does not already cross any calls, then add this
               insn to the sched_before_next_call list so that it will still
               not cross calls after scheduling.  */
            if (reg_n_calls_crossed[regno] == 0)
              add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
          }
        return;
      }

    case MEM:
      {
        /* Reading memory.  */

        rtx pending, pending_mem;

        pending = pending_read_insns;
        pending_mem = pending_read_mems;
        while (pending)
          {
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (read_dependence (XEXP (pending_mem, 0), x))
                add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);
          }

        pending = pending_write_insns;
        pending_mem = pending_write_mems;
        while (pending)
          {
            /* If a dependency already exists, don't create a new one.  */
            if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
              if (true_dependence (XEXP (pending_mem, 0), x))
                add_dependence (insn, XEXP (pending, 0), 0);

            pending = XEXP (pending, 1);
            pending_mem = XEXP (pending_mem, 1);
          }
        if (last_pending_memory_flush)
          add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);

        /* Always add these dependencies to pending_reads, since
           this insn may be followed by a write.  */
        add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
                                 insn, x);

        /* Take advantage of tail recursion here.  */
        sched_analyze_2 (XEXP (x, 0), insn);
        return;
      }

    case ASM_OPERANDS:
    case ASM_INPUT:
    case TRAP_IF:
    case UNSPEC_VOLATILE:
      {
        rtx u;

        /* Traditional and volatile asm instructions must be considered to use
           and clobber all hard registers, all pseudo-registers and all of
           memory.  So must TRAP_IF and UNSPEC_VOLATILE operations.

           Consider for instance a volatile asm that changes the fpu rounding
           mode.  An insn should not be moved across this even if it only uses
           pseudo-regs because it might give an incorrectly rounded result.  */
        if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
          {
            int max_reg = max_reg_num ();
            for (i = 0; i < max_reg; i++)
              {
                for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                reg_last_uses[i] = 0;
                if (reg_last_sets[i])
                  add_dependence (insn, reg_last_sets[i], 0);
                reg_last_sets[i] = insn;
              }

            flush_pending_lists (insn);
          }

        /* For all ASM_OPERANDS, we must traverse the vector of input operands.
           We cannot just fall through here since then we would be confused
           by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
           traditional asms unlike their normal usage.  */

        if (code == ASM_OPERANDS)
          for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
            sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
        return;
      }

    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
      /* These both read and modify the result.  We must handle them as writes
         to get proper dependencies for following instructions.  We must handle
         them as reads to get proper dependencies from this to previous
         instructions.  Thus we need to pass them to both sched_analyze_1
         and sched_analyze_2.  We must call sched_analyze_2 first in order
         to get the proper antecedent for the read.  */
      sched_analyze_2 (XEXP (x, 0), insn);
      sched_analyze_1 (x, insn);
      return;
    }

  /* Other cases: walk the insn.  */
  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    {
      if (fmt[i] == 'e')
        sched_analyze_2 (XEXP (x, i), insn);
      else if (fmt[i] == 'E')
        for (j = 0; j < XVECLEN (x, i); j++)
          sched_analyze_2 (XVECEXP (x, i, j), insn);
    }
}
/* Analyze an INSN with pattern X to find all dependencies.  */

static void
sched_analyze_insn (x, insn)
     rtx x, insn;
{
  register RTX_CODE code = GET_CODE (x);
  rtx link;

  if (code == SET || code == CLOBBER)
    sched_analyze_1 (x, insn);
  else if (code == PARALLEL)
    {
      register int i;
      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
        {
          code = GET_CODE (XVECEXP (x, 0, i));
          if (code == SET || code == CLOBBER)
            sched_analyze_1 (XVECEXP (x, 0, i), insn);
          else
            sched_analyze_2 (XVECEXP (x, 0, i), insn);
        }
    }
  else
    sched_analyze_2 (x, insn);

  /* Handle function calls and function returns created by the epilogue
     threading code.  */
  if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
    {
      rtx dep_insn;
      rtx prev_dep_insn;

      /* When scheduling instructions, we make sure calls don't lose their
         accompanying USE insns by depending them one on another in order.

         Also, we must do the same thing for returns created by the epilogue
         threading code.  Note this code works only in this special case,
         because other passes make no guarantee that they will never emit
         an instruction between a USE and a RETURN.  There is such a guarantee
         for USE instructions immediately before a call.  */

      prev_dep_insn = insn;
      dep_insn = PREV_INSN (insn);
      while (GET_CODE (dep_insn) == INSN
             && GET_CODE (PATTERN (dep_insn)) == USE)
        {
          SCHED_GROUP_P (prev_dep_insn) = 1;

          /* Make a copy of all dependencies on dep_insn, and add to insn.
             This is so that all of the dependencies will apply to the
             group.  */

          for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
            add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));

          prev_dep_insn = dep_insn;
          dep_insn = PREV_INSN (dep_insn);
        }
    }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
   for every dependency.  */

static int
sched_analyze (head, tail)
     rtx head, tail;
{
  register rtx insn;
  register rtx u;
  register int n_insns = 0;
  register int luid = 0;

  for (insn = head; ; insn = NEXT_INSN (insn))
    {
      INSN_LUID (insn) = luid++;

      if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
        {
          sched_analyze_insn (PATTERN (insn), insn);
          n_insns += 1;
        }
      else if (GET_CODE (insn) == CALL_INSN)
        {
          rtx x;
          register int i;

          /* Any instruction using a hard register which may get clobbered
             by a call needs to be marked as dependent on this call.
             This prevents a use of a hard return reg from being moved
             past a void call (i.e. it does not explicitly set the hard
             return reg).  */

          for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
            if (call_used_regs[i] || global_regs[i])
              {
                for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
                  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
                reg_last_uses[i] = 0;
                if (reg_last_sets[i])
                  add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
                reg_last_sets[i] = insn;
                /* Insn, being a CALL_INSN, magically depends on
                   `last_function_call' already.  */
              }

          /* For each insn which shouldn't cross a call, add a dependence
             between that insn and this call insn.  */
          x = LOG_LINKS (sched_before_next_call);
          while (x)
            {
              add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
              x = XEXP (x, 1);
            }
          LOG_LINKS (sched_before_next_call) = 0;

          sched_analyze_insn (PATTERN (insn), insn);

          /* We don't need to flush memory for a function call which does
             not involve memory.  */
          if (! CONST_CALL_P (insn))
            {
              /* In the absence of interprocedural alias analysis,
                 we must flush all pending reads and writes, and
                 start new dependencies starting from here.  */
              flush_pending_lists (insn);
            }

          /* Depend this function call (actually, the user of this
             function call) on all hard register clobberage.  */
          last_function_call = insn;
          n_insns += 1;
        }

      if (insn == tail)
        return n_insns;
    }
}
/* Called when we see a set of a register.  If death is true, then we are
   scanning backwards.  Mark that register as unborn.  If nobody says
   otherwise, that is how things will remain.  If death is false, then we
   are scanning forwards.  Mark that register as being born.  */

static void
sched_note_set (b, x, death)
     int b;
     rtx x;
     int death;
{
  register int regno, j;
  register rtx reg = SET_DEST (x);
  int subreg_p = 0;

  if (reg == 0)
    return;

  while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
         || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
    {
      /* Must treat modification of just one hardware register of a multi-reg
         value or just a byte field of a register exactly the same way that
         mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
         does not kill the entire register.  */
      if (GET_CODE (reg) != SUBREG
          || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
        subreg_p = 1;

      reg = SUBREG_REG (reg);
    }

  if (GET_CODE (reg) != REG)
    return;

  /* Global registers are always live, so the code below does not apply
     to them.  */

  regno = REGNO (reg);
  if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
    {
      register int offset = regno / REGSET_ELT_BITS;
      register REGSET_ELT_TYPE bit
        = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);

      if (death)
        {
          /* If we only set part of the register, then this set does not
             kill it.  */
          if (subreg_p)
            return;

          /* Try killing this register.  */
          if (regno < FIRST_PSEUDO_REGISTER)
            {
              int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              while (--j >= 0)
                {
                  offset = (regno + j) / REGSET_ELT_BITS;
                  bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);

                  bb_live_regs[offset] &= ~bit;
                  bb_dead_regs[offset] |= bit;
                }
            }
          else
            {
              bb_live_regs[offset] &= ~bit;
              bb_dead_regs[offset] |= bit;
            }
        }
      else
        {
          /* Make the register live again.  */
          if (regno < FIRST_PSEUDO_REGISTER)
            {
              int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
              while (--j >= 0)
                {
                  offset = (regno + j) / REGSET_ELT_BITS;
                  bit = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);

                  bb_live_regs[offset] |= bit;
                  bb_dead_regs[offset] &= ~bit;
                }
            }
          else
            {
              bb_live_regs[offset] |= bit;
              bb_dead_regs[offset] &= ~bit;
            }
        }
    }
}
/* Macros and functions for keeping the priority queue sorted, and
   dealing with queueing and unqueueing of instructions.  */

#define SCHED_SORT(READY, NEW_READY, OLD_READY) \
  do { if ((NEW_READY) - (OLD_READY) == 1)                              \
         swap_sort (READY, NEW_READY);                                  \
       else if ((NEW_READY) - (OLD_READY) > 1)                          \
         qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); }   \
  while (0)
/* Returns a positive value if y is preferred; returns a negative value if
   x is preferred.  Should never return 0, since that will make the sort
   unstable.  */

static int
rank_for_schedule (x, y)
     rtx *x, *y;
{
  rtx tmp = *x;
  rtx tmp2 = *y;
  rtx link;
  int tmp_class, tmp2_class;
  int value;

  /* Choose the instruction with the highest priority, if different.  */
  if (value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2))
    return value;

  if (last_scheduled_insn)
    {
      /* Classify the instructions into three classes:
         1) Data dependent on last scheduled insn.
         2) Anti/Output dependent on last scheduled insn.
         3) Independent of last scheduled insn, or has latency of one.
         Choose the insn from the highest numbered class if different.  */
      link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
      if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
        tmp_class = 3;
      else if (REG_NOTE_KIND (link) == 0) /* Data dependence.  */
        tmp_class = 1;
      else
        tmp_class = 2;

      link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
      if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
        tmp2_class = 3;
      else if (REG_NOTE_KIND (link) == 0) /* Data dependence.  */
        tmp2_class = 1;
      else
        tmp2_class = 2;

      if (value = tmp_class - tmp2_class)
        return value;
    }

  /* If insns are equally good, sort by INSN_LUID (original insn order),
     so that we make the sort stable.  This minimizes instruction movement,
     thus minimizing sched's effect on debugging and cross-jumping.  */
  return INSN_LUID (tmp) - INSN_LUID (tmp2);
}
/* Resort the array A in which only element at index N may be out of order.  */

__inline static void
swap_sort (a, n)
     rtx *a;
     int n;
{
  rtx insn = a[n];
  int i = n - 1;

  while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
    {
      a[i+1] = a[i];
      i -= 1;
    }
  a[i+1] = insn;
}
static int max_priority;

/* Add INSN to the insn queue so that it fires at least N_CYCLES
   before the currently executing insn.  */

__inline static void
queue_insn (insn, n_cycles)
     rtx insn;
     int n_cycles;
{
  int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
  NEXT_INSN (insn) = insn_queue[next_q];
  insn_queue[next_q] = insn;
  q_size += 1;
}
/* Return nonzero if PAT is the pattern of an insn which makes a
   register live.  */

static int
birthing_insn_p (pat)
     rtx pat;
{
  int j;

  if (reload_completed == 1)
    return 0;

  if (GET_CODE (pat) == SET
      && GET_CODE (SET_DEST (pat)) == REG)
    {
      rtx dest = SET_DEST (pat);
      int i = REGNO (dest);
      int offset = i / REGSET_ELT_BITS;
      REGSET_ELT_TYPE bit = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);

      /* It would be more accurate to use refers_to_regno_p or
         reg_mentioned_p to determine when the dest is not live before this
         insn.  */

      if (bb_live_regs[offset] & bit)
        return (reg_n_sets[i] == 1);

      return 0;
    }
  if (GET_CODE (pat) == PARALLEL)
    {
      for (j = 0; j < XVECLEN (pat, 0); j++)
        if (birthing_insn_p (XVECEXP (pat, 0, j)))
          return 1;
    }
  return 0;
}
/* PREV is an insn that is ready to execute.  Adjust its priority if that
   will help shorten register lifetimes.  */

__inline static void
adjust_priority (prev)
     rtx prev;
{
  /* Trying to shorten register lives after reload has completed
     is useless and wrong.  It gives inaccurate schedules.  */
  if (reload_completed == 0)
    {
      rtx note;
      int n_deaths = 0;

      /* ??? This code has no effect, because REG_DEAD notes are removed
         before we ever get here.  */
      for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
        if (REG_NOTE_KIND (note) == REG_DEAD)
          n_deaths += 1;

      /* Defer scheduling insns which kill registers, since that
         shortens register lives.  Prefer scheduling insns which
         make registers live for the same reason.  */
      switch (n_deaths)
        {
        default:
          INSN_PRIORITY (prev) >>= 3;
          break;
        case 3:
          INSN_PRIORITY (prev) >>= 2;
          break;
        case 2:
        case 1:
          INSN_PRIORITY (prev) >>= 1;
          break;
        case 0:
          if (birthing_insn_p (PATTERN (prev)))
            {
              int max = max_priority;

              if (max > INSN_PRIORITY (prev))
                INSN_PRIORITY (prev) = max;
            }
          break;
        }
    }
}
/* INSN is the "currently executing insn".  Launch each insn which was
   waiting on INSN (in the backwards dataflow sense).  READY is a
   vector of insns which are ready to fire.  N_READY is the number of
   elements in READY.  CLOCK is the current virtual cycle.  */

static int
schedule_insn (insn, ready, n_ready, clock)
     rtx insn;
     rtx *ready;
     int n_ready;
     int clock;
{
  rtx link;
  int new_ready = n_ready;

  if (MAX_BLOCKAGE > 1)
    schedule_unit (insn_unit (insn), insn, clock);

  if (LOG_LINKS (insn) == 0)
    return n_ready;

  /* This is used by the function adjust_priority above.  */
  if (n_ready > 0)
    max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
  else
    max_priority = INSN_PRIORITY (insn);

  for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
    {
      rtx prev = XEXP (link, 0);
      int cost = insn_cost (prev, link, insn);

      if ((INSN_REF_COUNT (prev) -= 1) != 0)
        {
          /* We satisfied one requirement to fire PREV.  Record the earliest
             time when PREV can fire.  No need to do this if the cost is 1,
             because PREV can fire no sooner than the next cycle.  */
          if (cost > 1)
            INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
        }
      else
        {
          /* We satisfied the last requirement to fire PREV.  Ensure that all
             timing requirements are satisfied.  */
          if (INSN_TICK (prev) - clock > cost)
            cost = INSN_TICK (prev) - clock;

          /* Adjust the priority of PREV and either put it on the ready
             list or queue it.  */
          adjust_priority (prev);
          if (cost <= 1)
            ready[new_ready++] = prev;
          else
            queue_insn (prev, cost);
        }
    }

  return new_ready;
}
/* Given N_READY insns in the ready list READY at time CLOCK, queue
   those that are blocked due to function unit hazards and rearrange
   the remaining ones to minimize subsequent function unit hazards.  */

static int
schedule_select (ready, n_ready, clock, file)
     rtx *ready;
     int n_ready, clock;
     FILE *file;
{
  int pri = INSN_PRIORITY (ready[0]);
  int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
  rtx insn;

  /* Work down the ready list in groups of instructions with the same
     priority value.  Queue insns in the group that are blocked and
     select among those that remain for the one with the largest
     potential hazard.  */
  for (i = 0; i < n_ready; i = j)
    {
      int opri = pri;

      for (j = i + 1; j < n_ready; j++)
        if ((pri = INSN_PRIORITY (ready[j])) != opri)
          break;

      /* Queue insns in the group that are blocked.  */
      for (k = i, q = 0; k < j; k++)
        {
          insn = ready[k];
          if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
            {
              q++;
              ready[k] = 0;
              queue_insn (insn, cost);
              if (file)
                fprintf (file, "\n;; blocking insn %d for %d cycles",
                         INSN_UID (insn), cost);
            }
        }

      new_ready -= q;

      /* Check the next group if all insns were queued.  */
      if (q == j - i)
        continue;

      /* If more than one remains, select the first one with the largest
         potential hazard.  */
      else if (j - i - q > 1)
        {
          best_cost = -1;
          for (k = i; k < j; k++)
            {
              if ((insn = ready[k]) == 0)
                continue;
              if ((cost = potential_hazard (insn_unit (insn), insn, 0))
                  > best_cost)
                {
                  best_cost = cost;
                  best_insn = k;
                }
            }
        }

      /* We have found a suitable insn to schedule.  */
      break;
    }

  /* Move the best insn to be front of the ready list.  */
  if (best_insn != 0)
    {
      if (file)
        {
          fprintf (file, ", now");
          for (i = 0; i < n_ready; i++)
            if (ready[i])
              fprintf (file, " %d", INSN_UID (ready[i]));
          fprintf (file, "\n;; insn %d has a greater potential hazard",
                   INSN_UID (ready[best_insn]));
        }
      for (i = best_insn; i > 0; i--)
        {
          insn = ready[i-1];
          ready[i-1] = ready[i];
          ready[i] = insn;
        }
    }

  /* Compact the ready list.  */
  if (new_ready < n_ready)
    for (i = j = 0; i < n_ready; i++)
      if (ready[i])
        ready[j++] = ready[i];

  return new_ready;
}
/* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
   dead_notes list.  */

static void
create_reg_dead_note (reg, insn)
     rtx reg, insn;
{
  rtx link;

  /* The number of registers killed after scheduling must be the same as the
     number of registers killed before scheduling.  The number of REG_DEAD
     notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
     might become one DImode hard register REG_DEAD note, but the number of
     registers killed will be conserved.

     We carefully remove REG_DEAD notes from the dead_notes list, so that
     there will be none left at the end.  If we run out early, then there
     is a bug somewhere in flow, combine and/or sched.  */

  if (dead_notes == 0)
    {
      link = rtx_alloc (EXPR_LIST);
      PUT_REG_NOTE_KIND (link, REG_DEAD);
    }
  else
    {
      /* Number of regs killed by REG.  */
      int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
                         : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
      /* Number of regs killed by REG_DEAD notes taken off the list.  */
      int reg_note_regs;

      link = dead_notes;
      reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
                       : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
                                           GET_MODE (XEXP (link, 0))));
      while (reg_note_regs < regs_killed)
        {
          link = XEXP (link, 1);
          reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
                            : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
                                                GET_MODE (XEXP (link, 0))));
        }
      dead_notes = XEXP (link, 1);

      /* If we took too many regs kills off, put the extra ones back.  */
      while (reg_note_regs > regs_killed)
        {
          rtx temp_reg, temp_link;

          temp_reg = gen_rtx (REG, word_mode, 0);
          temp_link = rtx_alloc (EXPR_LIST);
          PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
          XEXP (temp_link, 0) = temp_reg;
          XEXP (temp_link, 1) = dead_notes;
          dead_notes = temp_link;
          reg_note_regs--;
        }
    }

  XEXP (link, 0) = reg;
  XEXP (link, 1) = REG_NOTES (insn);
  REG_NOTES (insn) = link;
}
/* Subroutine on attach_deaths_insn--handles the recursive search
   through INSN.  If SET_P is true, then x is being modified by the insn.  */

static void
attach_deaths (x, insn, set_p)
     rtx x;
     rtx insn;
     int set_p;
{
  register int i;
  register int j;
  register enum rtx_code code;
  register char *fmt;

  if (x == 0)
    return;

  code = GET_CODE (x);

  switch (code)
    {
    case CONST_INT:
    case CONST_DOUBLE:
    case LABEL_REF:
    case SYMBOL_REF:
    case CONST:
    case PC:
    case CC0:
      /* Get rid of the easy cases first.  */
      return;

    case REG:
      {
        /* If the register dies in this insn, queue that note, and mark
           this register as needing to die.  */
        /* This code is very similar to mark_used_1 (if set_p is false)
           and mark_set_1 (if set_p is true) in flow.c.  */

        register int regno = REGNO (x);
        register int offset = regno / REGSET_ELT_BITS;
        register REGSET_ELT_TYPE bit
          = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);
        REGSET_ELT_TYPE all_needed = (old_live_regs[offset] & bit);
        REGSET_ELT_TYPE some_needed = (old_live_regs[offset] & bit);

        if (set_p && ! some_needed)
          return;

        if (regno < FIRST_PSEUDO_REGISTER)
          {
            int n;

            n = HARD_REGNO_NREGS (regno, GET_MODE (x));
            while (--n > 0)
              {
                some_needed |= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
                                & ((REGSET_ELT_TYPE) 1
                                   << ((regno + n) % REGSET_ELT_BITS)));
                all_needed &= (old_live_regs[(regno + n) / REGSET_ELT_BITS]
                               & ((REGSET_ELT_TYPE) 1
                                  << ((regno + n) % REGSET_ELT_BITS)));
              }
          }

        /* If it wasn't live before we started, then add a REG_DEAD note.
           We must check the previous lifetime info not the current info,
           because we may have to execute this code several times, e.g.
           once for a clobber (which doesn't add a note) and later
           for a use (which does add a note).

           Always make the register live.  We must do this even if it was
           live before, because this may be an insn which sets and uses
           the same register, in which case the register has already been
           killed, so we must make it live again.

           Global registers are always live, and should never have a REG_DEAD
           note added for them, so none of the code below applies to them.  */

        if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
          {
            /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
               STACK_POINTER_REGNUM, since these are always considered to be
               live.  Similarly for ARG_POINTER_REGNUM if it is fixed.  */
            if (regno != FRAME_POINTER_REGNUM
#if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
                && ! (regno == HARD_FRAME_POINTER_REGNUM)
#endif
#if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
                && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
#endif
                && regno != STACK_POINTER_REGNUM)
              {
                if (! all_needed && ! dead_or_set_p (insn, x))
                  {
                    /* If none of the words in X is needed, make a REG_DEAD
                       note.  Otherwise, we must make partial REG_DEAD
                       notes.  */
                    if (! some_needed)
                      create_reg_dead_note (x, insn);
                    else
                      {
                        int i;

                        /* Don't make a REG_DEAD note for a part of a
                           register that is set in the insn.  */
                        for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
                             i >= 0; i--)
                          if ((old_live_regs[(regno + i) / REGSET_ELT_BITS]
                               & ((REGSET_ELT_TYPE) 1
                                  << ((regno + i) % REGSET_ELT_BITS))) == 0
                              && ! dead_or_set_regno_p (insn, regno + i))
                            create_reg_dead_note (gen_rtx (REG, word_mode,
                                                           regno + i),
                                                  insn);
                      }
                  }
              }

            if (regno < FIRST_PSEUDO_REGISTER)
              {
                int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
                while (--j >= 0)
                  {
                    offset = (regno + j) / REGSET_ELT_BITS;
                    bit
                      = (REGSET_ELT_TYPE) 1 << ((regno + j) % REGSET_ELT_BITS);

                    bb_dead_regs[offset] &= ~bit;
                    bb_live_regs[offset] |= bit;
                  }
              }
            else
              {
                bb_dead_regs[offset] &= ~bit;
                bb_live_regs[offset] |= bit;
              }
          }
        return;
      }

    case MEM:
      /* Handle tail-recursive case.  */
      attach_deaths (XEXP (x, 0), insn, 0);
      return;

    case SUBREG:
    case STRICT_LOW_PART:
      /* These two cases preserve the value of SET_P, so handle them
         separately.  */
      attach_deaths (XEXP (x, 0), insn, set_p);
      return;

    case ZERO_EXTRACT:
    case SIGN_EXTRACT:
      /* This case preserves the value of SET_P for the first operand, but
         clears it for the other two.  */
      attach_deaths (XEXP (x, 0), insn, set_p);
      attach_deaths (XEXP (x, 1), insn, 0);
      attach_deaths (XEXP (x, 2), insn, 0);
      return;

    default:
      /* Other cases: walk the insn.  */
      fmt = GET_RTX_FORMAT (code);
      for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
        {
          if (fmt[i] == 'e')
            attach_deaths (XEXP (x, i), insn, 0);
          else if (fmt[i] == 'E')
            for (j = 0; j < XVECLEN (x, i); j++)
              attach_deaths (XVECEXP (x, i, j), insn, 0);
        }
    }
}
/* After INSN has executed, add register death notes for each register
   that is dead after INSN.  */

static void
attach_deaths_insn (insn)
     rtx insn;
{
  rtx x = PATTERN (insn);
  register RTX_CODE code = GET_CODE (x);

  if (code == SET)
    {
      attach_deaths (SET_SRC (x), insn, 0);

      /* A register might die here even if it is the destination, e.g.
         it is the target of a volatile read and is otherwise unused.
         Hence we must always call attach_deaths for the SET_DEST.  */
      attach_deaths (SET_DEST (x), insn, 1);
    }
  else if (code == PARALLEL)
    {
      register int i;

      for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
        {
          code = GET_CODE (XVECEXP (x, 0, i));
          if (code == SET)
            {
              attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);

              attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
            }
          /* Flow does not add REG_DEAD notes to registers that die in
             clobbers, so we can't either.  */
          else if (code != CLOBBER)
            attach_deaths (XVECEXP (x, 0, i), insn, 0);
        }
    }
  /* Flow does not add REG_DEAD notes to registers that die in
     clobbers, so we can't either.  */
  else if (code != CLOBBER)
    attach_deaths (x, insn, 0);
}
/* Delete notes beginning with INSN and maybe put them in the chain
   of notes ended by NOTE_LIST.
   Returns the insn following the notes.  */

static rtx
unlink_notes (insn, tail)
     rtx insn, tail;
{
  rtx prev = PREV_INSN (insn);

  while (insn != tail && GET_CODE (insn) == NOTE)
    {
      rtx next = NEXT_INSN (insn);
      /* Delete the note from its current position.  */
      if (prev)
        NEXT_INSN (prev) = next;
      if (next)
        PREV_INSN (next) = prev;

      if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
        /* Record line-number notes so they can be reused.  */
        LINE_NOTE (insn) = insn;
      else
        {
          /* Insert the note at the end of the notes list.  */
          PREV_INSN (insn) = note_list;
          if (note_list)
            NEXT_INSN (note_list) = insn;
          note_list = insn;
        }

      insn = next;
    }
  return insn;
}
/* Constructor for `sometimes' data structure.  */

static int
new_sometimes_live (regs_sometimes_live, offset, bit, sometimes_max)
     struct sometimes *regs_sometimes_live;
     int offset, bit;
     int sometimes_max;
{
  register struct sometimes *p;
  register int regno = offset * REGSET_ELT_BITS + bit;

  /* There should never be a register greater than max_regno here.  If there
     is, it means that a define_split has created a new pseudo reg.  This
     is not allowed, since there will not be flow info available for any
     new register, so catch the error here.  */
  if (regno >= max_regno)
    abort ();

  p = &regs_sometimes_live[sometimes_max];
  p->offset = offset;
  p->bit = bit;
  p->live_length = 0;
  p->calls_crossed = 0;
  sometimes_max++;
  return sometimes_max;
}
/* Count lengths of all regs we are currently tracking,
   and find new registers no longer live.  */

static void
finish_sometimes_live (regs_sometimes_live, sometimes_max)
     struct sometimes *regs_sometimes_live;
     int sometimes_max;
{
  int i;

  for (i = 0; i < sometimes_max; i++)
    {
      register struct sometimes *p = &regs_sometimes_live[i];
      int regno;

      regno = p->offset * REGSET_ELT_BITS + p->bit;

      sched_reg_live_length[regno] += p->live_length;
      sched_reg_n_calls_crossed[regno] += p->calls_crossed;
    }
}
/* Use modified list scheduling to rearrange insns in basic block
   B.  FILE, if nonzero, is where we dump interesting output about
   this pass.  */

static void
schedule_block (b, file)
     int b;
     FILE *file;
{
  rtx insn, last;
  rtx *ready, link;
  rtx prev_head, next_tail;

  int i, j, n_ready = 0, new_ready, n_insns = 0;
  int sched_n_insns = 0;
  int clock;
#define NEED_NOTHING	0
#define NEED_HEAD	1
#define NEED_TAIL	2
  int new_needs;

  /* HEAD and TAIL delimit the region being scheduled.  */
  rtx head = basic_block_head[b];
  rtx tail = basic_block_end[b];
  /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
     being scheduled.  When the insns have been ordered,
     these insns delimit where the new insns are to be
     spliced back into the insn chain.  */

  /* Keep life information accurate.  */
  register struct sometimes *regs_sometimes_live;
  int sometimes_max;

  if (file)
    fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
             b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));

  i = max_reg_num ();
  reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
  bzero (reg_last_uses, i * sizeof (rtx));
  reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
  bzero (reg_last_sets, i * sizeof (rtx));
  /* Remove certain insns at the beginning from scheduling,
     by advancing HEAD.  */

  /* At the start of a function, before reload has run, don't delay getting
     parameters from hard registers into pseudo registers.  */
  if (reload_completed == 0 && b == 0)
    {
      while (head != tail
             && GET_CODE (head) == NOTE
             && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
        head = NEXT_INSN (head);
      while (head != tail
             && GET_CODE (head) == INSN
             && GET_CODE (PATTERN (head)) == SET)
        {
          rtx src = SET_SRC (PATTERN (head));
          while (GET_CODE (src) == SUBREG
                 || GET_CODE (src) == SIGN_EXTEND
                 || GET_CODE (src) == ZERO_EXTEND
                 || GET_CODE (src) == SIGN_EXTRACT
                 || GET_CODE (src) == ZERO_EXTRACT)
            src = XEXP (src, 0);
          if (GET_CODE (src) != REG
              || REGNO (src) >= FIRST_PSEUDO_REGISTER)
            break;

          /* Keep this insn from ever being scheduled.  */
          INSN_REF_COUNT (head) = 1;
          head = NEXT_INSN (head);
        }
    }

  /* Don't include any notes or labels at the beginning of the
     basic block, or notes at the ends of basic blocks.  */
  while (head != tail)
    {
      if (GET_CODE (head) == NOTE)
        head = NEXT_INSN (head);
      else if (GET_CODE (tail) == NOTE)
        tail = PREV_INSN (tail);
      else if (GET_CODE (head) == CODE_LABEL)
        head = NEXT_INSN (head);
      else
        break;
    }

  /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
     to schedule this block.  */
  if (head == tail
      && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
    return;

#if 0
  /* This short-cut doesn't work.  It does not count call insns crossed by
     registers in reg_sometimes_live.  It does not mark these registers as
     dead if they die in this block.  It does not mark these registers live
     (or create new reg_sometimes_live entries if necessary) if they are born
     in this block.

     The easy solution is to just always schedule a block.  This block only
     has one insn, so this won't slow down this pass by much.  */

  if (head == tail)
    return;
#endif
  /* Now HEAD through TAIL are the insns actually to be rearranged;
     Let PREV_HEAD and NEXT_TAIL enclose them.  */
  prev_head = PREV_INSN (head);
  next_tail = NEXT_INSN (tail);

  /* Initialize basic block data structures.  */
  dead_notes = 0;
  pending_read_insns = 0;
  pending_read_mems = 0;
  pending_write_insns = 0;
  pending_write_mems = 0;
  pending_lists_length = 0;
  last_pending_memory_flush = 0;
  last_function_call = 0;
  last_scheduled_insn = 0;

  LOG_LINKS (sched_before_next_call) = 0;

  n_insns += sched_analyze (head, tail);
  if (n_insns == 0)
    {
      free_pending_lists ();
      return;
    }

  /* Allocate vector to hold insns to be rearranged (except those
     insns which are controlled by an insn with SCHED_GROUP_P set).
     All these insns are included between ORIG_HEAD and ORIG_TAIL,
     as those variables ultimately are set up.  */
  ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
  /* TAIL is now the last of the insns to be rearranged.
     Put those insns into the READY vector.  */
  insn = tail;

  /* For all branches, calls, uses, and cc0 setters, force them to remain
     in order at the end of the block by adding dependencies and giving
     the last a high priority.  There may be notes present, and prev_head
     may also be a note.

     Branches must obviously remain at the end.  Calls should remain at the
     end since moving them results in worse register allocation.  Uses remain
     at the end to ensure proper register allocation.  cc0 setters remain
     at the end because they can't be moved away from their cc0 user.  */
  last = 0;
  while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
         || (GET_CODE (insn) == INSN
             && (GET_CODE (PATTERN (insn)) == USE
#ifdef HAVE_cc0
                 || sets_cc0_p (PATTERN (insn))
#endif
                 ))
         || GET_CODE (insn) == NOTE)
    {
      if (GET_CODE (insn) != NOTE)
        {
          if (last == 0)
            {
              ready[n_ready++] = insn;
              INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
              INSN_REF_COUNT (insn) = 0;
            }
          else if (! find_insn_list (insn, LOG_LINKS (last)))
            {
              add_dependence (last, insn, REG_DEP_ANTI);
              INSN_REF_COUNT (insn)++;
            }
          last = insn;

          /* Skip over insns that are part of a group.  */
          while (SCHED_GROUP_P (insn))
            insn = prev_nonnote_insn (insn);
        }

      insn = PREV_INSN (insn);
      /* Don't overrun the bounds of the basic block.  */
      if (insn == prev_head)
        break;
    }
  /* Assign priorities to instructions.  Also check whether they
     are in priority order already.  If so then I will be nonnegative.
     We use this shortcut only before reloading.  */

  i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;

  for (; insn != prev_head; insn = PREV_INSN (insn))
    {
      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
        {
          priority (insn);
          if (INSN_REF_COUNT (insn) == 0)
            {
              if (last == 0)
                ready[n_ready++] = insn;
              else
                {
                  /* Make this dependent on the last of the instructions
                     that must remain in order at the end of the block.  */
                  add_dependence (last, insn, REG_DEP_ANTI);
                  INSN_REF_COUNT (insn) = 1;
                }
            }
          if (SCHED_GROUP_P (insn))
            {
              while (SCHED_GROUP_P (insn))
                {
                  insn = PREV_INSN (insn);
                  while (GET_CODE (insn) == NOTE)
                    insn = PREV_INSN (insn);
                  priority (insn);
                }
              continue;
            }

          if (INSN_PRIORITY (insn) < i)
            i = INSN_PRIORITY (insn);
          else if (INSN_PRIORITY (insn) > i)
            i = DONE_PRIORITY;
        }
    }

#if 0
  /* This short-cut doesn't work.  It does not count call insns crossed by
     registers in reg_sometimes_live.  It does not mark these registers as
     dead if they die in this block.  It does not mark these registers live
     (or create new reg_sometimes_live entries if necessary) if they are born
     in this block.

     The easy solution is to just always schedule a block.  These blocks tend
     to be very short, so this doesn't slow down this pass by much.  */

  /* If existing order is good, don't bother to reorder.  */
  if (i != DONE_PRIORITY)
    {
      if (file)
        fprintf (file, ";; already scheduled\n");

      if (reload_completed == 0)
        {
          for (i = 0; i < sometimes_max; i++)
            regs_sometimes_live[i].live_length += n_insns;

          finish_sometimes_live (regs_sometimes_live, sometimes_max);
        }
      free_pending_lists ();
      return;
    }
#endif
  /* Scan all the insns to be scheduled, removing NOTE insns
     and register death notes.
     Line number NOTE insns end up in NOTE_LIST.
     Register death notes end up in DEAD_NOTES.

     Recreate the register life information for the end of this basic
     block.  */

  if (reload_completed == 0)
    {
      bcopy (basic_block_live_at_start[b], bb_live_regs, regset_bytes);
      bzero (bb_dead_regs, regset_bytes);

      if (b == 0)
        {
          /* This is the first block in the function.  There may be insns
             before head that we can't schedule.   We still need to examine
             them though for accurate register lifetime analysis.  */

          /* We don't want to remove any REG_DEAD notes as the code below
             does.  */

          for (insn = basic_block_head[b]; insn != head;
               insn = NEXT_INSN (insn))
            if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
              {
                /* See if the register gets born here.  */
                /* We must check for registers being born before we check for
                   registers dying.  It is possible for a register to be born
                   and die in the same insn, e.g. reading from a volatile
                   memory location into an otherwise unused register.  Such
                   a register must be marked as dead after this insn.  */
                if (GET_CODE (PATTERN (insn)) == SET
                    || GET_CODE (PATTERN (insn)) == CLOBBER)
                  sched_note_set (b, PATTERN (insn), 0);
                else if (GET_CODE (PATTERN (insn)) == PARALLEL)
                  {
                    int j;
                    for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
                      if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
                          || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
                        sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);

                    /* ??? This code is obsolete and should be deleted.  It
                       is harmless though, so we will leave it in for now.  */
                    for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
                      if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
                        sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
                  }

                for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
                  {
                    if ((REG_NOTE_KIND (link) == REG_DEAD
                         || REG_NOTE_KIND (link) == REG_UNUSED)
                        /* Verify that the REG_NOTE has a legal value.  */
                        && GET_CODE (XEXP (link, 0)) == REG)
                      {
                        register int regno = REGNO (XEXP (link, 0));
                        register int offset = regno / REGSET_ELT_BITS;
                        register REGSET_ELT_TYPE bit
                          = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);

                        if (regno < FIRST_PSEUDO_REGISTER)
                          {
                            int j = HARD_REGNO_NREGS (regno,
                                                      GET_MODE (XEXP (link, 0)));
                            while (--j >= 0)
                              {
                                offset = (regno + j) / REGSET_ELT_BITS;
                                bit = ((REGSET_ELT_TYPE) 1
                                       << ((regno + j) % REGSET_ELT_BITS));

                                bb_live_regs[offset] &= ~bit;
                                bb_dead_regs[offset] |= bit;
                              }
                          }
                        else
                          {
                            bb_live_regs[offset] &= ~bit;
                            bb_dead_regs[offset] |= bit;
                          }
                      }
                  }
              }
        }
    }
  /* If debugging information is being produced, keep track of the line
     number notes for each insn.  */
  if (write_symbols != NO_DEBUG)
    {
      /* We must use the true line number for the first insn in the block
         that was computed and saved at the start of this pass.  We can't
         use the current line number, because scheduling of the previous
         block may have changed the current line number.  */
      rtx line = line_note_head[b];

      for (insn = basic_block_head[b];
           insn != next_tail;
           insn = NEXT_INSN (insn))
        if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
          line = insn;
        else
          LINE_NOTE (insn) = line;
    }

  for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
    {
      rtx prev, next, link;

      /* Farm out notes.  This is needed to keep the debugger from
         getting completely deranged.  */
      if (GET_CODE (insn) == NOTE)
        {
          prev = insn;
          insn = unlink_notes (insn, next_tail);
          if (prev == tail)
            abort ();
          if (prev == head)
            abort ();
          if (insn == next_tail)
            abort ();
        }

      if (reload_completed == 0
          && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
        {
          /* See if the register gets born here.  */
          /* We must check for registers being born before we check for
             registers dying.  It is possible for a register to be born and
             die in the same insn, e.g. reading from a volatile memory
             location into an otherwise unused register.  Such a register
             must be marked as dead after this insn.  */
          if (GET_CODE (PATTERN (insn)) == SET
              || GET_CODE (PATTERN (insn)) == CLOBBER)
            sched_note_set (b, PATTERN (insn), 0);
          else if (GET_CODE (PATTERN (insn)) == PARALLEL)
            {
              int j;
              for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
                if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
                    || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
                  sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);

              /* ??? This code is obsolete and should be deleted.  It
                 is harmless though, so we will leave it in for now.  */
              for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
                if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
                  sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
            }

          /* Need to know what registers this insn kills.  */
          for (prev = 0, link = REG_NOTES (insn); link; link = next)
            {
              next = XEXP (link, 1);
              if ((REG_NOTE_KIND (link) == REG_DEAD
                   || REG_NOTE_KIND (link) == REG_UNUSED)
                  /* Verify that the REG_NOTE has a legal value.  */
                  && GET_CODE (XEXP (link, 0)) == REG)
                {
                  register int regno = REGNO (XEXP (link, 0));
                  register int offset = regno / REGSET_ELT_BITS;
                  register REGSET_ELT_TYPE bit
                    = (REGSET_ELT_TYPE) 1 << (regno % REGSET_ELT_BITS);

                  /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
                     alone.  */
                  if (REG_NOTE_KIND (link) == REG_DEAD)
                    {
                      if (prev)
                        XEXP (prev, 1) = next;
                      else
                        REG_NOTES (insn) = next;
                      XEXP (link, 1) = dead_notes;
                      dead_notes = link;
                    }
                  else
                    prev = link;

                  if (regno < FIRST_PSEUDO_REGISTER)
                    {
                      int j = HARD_REGNO_NREGS (regno,
                                                GET_MODE (XEXP (link, 0)));
                      while (--j >= 0)
                        {
                          offset = (regno + j) / REGSET_ELT_BITS;
                          bit = ((REGSET_ELT_TYPE) 1
                                 << ((regno + j) % REGSET_ELT_BITS));

                          bb_live_regs[offset] &= ~bit;
                          bb_dead_regs[offset] |= bit;
                        }
                    }
                  else
                    {
                      bb_live_regs[offset] &= ~bit;
                      bb_dead_regs[offset] |= bit;
                    }
                }
              else
                prev = link;
            }
        }
    }
== 0)
3427 /* Keep track of register lives. */
3428 old_live_regs
= (regset
) alloca (regset_bytes
);
3430 = (struct sometimes
*) alloca (max_regno
* sizeof (struct sometimes
));
3433 /* Start with registers live at end. */
3434 for (j
= 0; j
< regset_size
; j
++)
3436 REGSET_ELT_TYPE live
= bb_live_regs
[j
];
3437 old_live_regs
[j
] = live
;
3441 for (bit
= 0; bit
< REGSET_ELT_BITS
; bit
++)
3442 if (live
& ((REGSET_ELT_TYPE
) 1 << bit
))
3443 sometimes_max
= new_sometimes_live (regs_sometimes_live
, j
,
3444 bit
, sometimes_max
);
  SCHED_SORT (ready, n_ready, 1);

  if (file)
    {
      fprintf (file, ";; ready list initially:\n;; ");
      for (i = 0; i < n_ready; i++)
        fprintf (file, "%d ", INSN_UID (ready[i]));
      fprintf (file, "\n\n");

      for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
        if (INSN_PRIORITY (insn) > 0)
          fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
                   INSN_UID (insn), INSN_PRIORITY (insn),
                   INSN_REF_COUNT (insn));
    }

  /* Now HEAD and TAIL are going to become disconnected
     entirely from the insn chain.  */
  tail = 0;

  /* Q_SIZE will always be zero here.  */
  q_ptr = 0; clock = 0;
  bzero (insn_queue, sizeof (insn_queue));
3473 /* Now, perform list scheduling. */
3475 /* Where we start inserting insns is after TAIL. */
3478 new_needs
= (NEXT_INSN (prev_head
) == basic_block_head
[b
]
3479 ? NEED_HEAD
: NEED_NOTHING
);
3480 if (PREV_INSN (next_tail
) == basic_block_end
[b
])
3481 new_needs
|= NEED_TAIL
;
3483 new_ready
= n_ready
;
3484 while (sched_n_insns
< n_insns
)
3486 q_ptr
= NEXT_Q (q_ptr
); clock
++;
      /* Add all pending insns that can be scheduled without stalls to the
         ready list.  */
      for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))

            fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
                     INSN_UID (insn), INSN_UID (last), clock);
          ready[new_ready++] = insn;

      insn_queue[q_ptr] = 0;
      /* If there are no ready insns, stall until one is ready and add all
         of the pending insns at that point to the ready list.  */

          register int stalls;

          for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
            if (insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)])

                for (; insn; insn = NEXT_INSN (insn))

                      fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
                               INSN_UID (insn), INSN_UID (last), stalls, clock);
                    ready[new_ready++] = insn;

                insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;

          q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
      /* There should be some instructions waiting to fire.  */

          fprintf (file, ";; ready list at T-%d:", clock);
          for (i = 0; i < new_ready; i++)
            fprintf (file, " %d (%x)",
                     INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
      /* Sort the ready list and choose the best insn to schedule.  Select
         which insn should issue in this cycle and queue those that are
         blocked by function unit hazards.

         N_READY holds the number of items that were scheduled the last time,
         minus the one instruction scheduled on the last loop iteration; it
         is not modified for any other reason in this loop.  */

      SCHED_SORT (ready, new_ready, n_ready);
      if (MAX_BLOCKAGE > 1)

          new_ready = schedule_select (ready, new_ready, clock, file);

        fprintf (file, "\n");

      /* We must set n_ready here, to ensure that sorting always
         occurs when we come back to the SCHED_SORT line above.  */
      n_ready = new_ready;
      last_scheduled_insn = insn = ready[0];
      /* The first insn scheduled becomes the new tail.  */

          fprintf (file, ", now");
          for (i = 0; i < n_ready; i++)
            fprintf (file, " %d", INSN_UID (ready[i]));
          fprintf (file, "\n");
      if (DONE_PRIORITY_P (insn))

      if (reload_completed == 0)

          /* Process this insn, and each insn linked to this one which must
             be immediately output after this insn.  */

              /* First we kill registers set by this insn, and then we
                 make registers used by this insn live.  This is the opposite
                 order used above because we are traversing the instructions
                 backwards.  */

              /* Strictly speaking, we should scan REG_UNUSED notes and make
                 every register mentioned there live; however, we will just
                 kill them again immediately below, so there doesn't seem to
                 be any reason to bother.  */
              /* See if this is the last notice we must take of a register.  */
              if (GET_CODE (PATTERN (insn)) == SET
                  || GET_CODE (PATTERN (insn)) == CLOBBER)
                sched_note_set (b, PATTERN (insn), 1);
              else if (GET_CODE (PATTERN (insn)) == PARALLEL)

                  for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
                    if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
                        || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
                      sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
              /* This code keeps life analysis information up to date.  */
              if (GET_CODE (insn) == CALL_INSN)

                  register struct sometimes *p;

                  /* A call kills all call used and global registers, except
                     for those mentioned in the call pattern which will be
                     made live again later.  */
                  for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
                    if (call_used_regs[i] || global_regs[i])

                        register int offset = i / REGSET_ELT_BITS;
                        register REGSET_ELT_TYPE bit
                          = (REGSET_ELT_TYPE) 1 << (i % REGSET_ELT_BITS);

                        bb_live_regs[offset] &= ~bit;
                        bb_dead_regs[offset] |= bit;
                  /* Regs live at the time of a call instruction must not
                     go in a register clobbered by calls.  Record this for
                     all regs now live.  Note that insns which are born or
                     die in a call do not cross a call, so this must be done
                     after the killings (above) and before the births
                     (below).  */
                  p = regs_sometimes_live;
                  for (i = 0; i < sometimes_max; i++, p++)
                    if (bb_live_regs[p->offset]
                        & ((REGSET_ELT_TYPE) 1 << p->bit))
                      p->calls_crossed += 1;
              /* Make every register used live, and add REG_DEAD notes for
                 registers which were not live before we started.  */
              attach_deaths_insn (insn);

              /* Find registers now made live by that instruction.  */
              for (i = 0; i < regset_size; i++)

                  REGSET_ELT_TYPE diff = bb_live_regs[i] & ~old_live_regs[i];

                      old_live_regs[i] |= diff;
                      for (bit = 0; bit < REGSET_ELT_BITS; bit++)
                        if (diff & ((REGSET_ELT_TYPE) 1 << bit))
                          sometimes_max
                            = new_sometimes_live (regs_sometimes_live, i, bit,
                                                  sometimes_max);
              /* Count lengths of all regs we are worrying about now,
                 and handle registers no longer live.  */

              for (i = 0; i < sometimes_max; i++)

                  register struct sometimes *p = &regs_sometimes_live[i];
                  int regno = p->offset * REGSET_ELT_BITS + p->bit;

                  p->live_length += 1;

                  if ((bb_live_regs[p->offset]
                       & ((REGSET_ELT_TYPE) 1 << p->bit)) == 0)

                      /* This is the end of one of this register's lifetime
                         segments.  Save the lifetime info collected so far,
                         and clear its bit in the old_live_regs entry.  */
                      sched_reg_live_length[regno] += p->live_length;
                      sched_reg_n_calls_crossed[regno] += p->calls_crossed;
                      old_live_regs[p->offset]
                        &= ~((REGSET_ELT_TYPE) 1 << p->bit);

                      /* Delete the reg_sometimes_live entry for this reg by
                         copying the last entry over top of it.  */
                      *p = regs_sometimes_live[--sometimes_max];
                      /* ...and decrement i so that this newly copied entry
                         will be processed.  */
          insn = PREV_INSN (insn);

      while (SCHED_GROUP_P (link));

      /* Set INSN back to the insn we are scheduling now.  */
      /* Schedule INSN.  Remove it from the ready list.  */

      NEXT_INSN (insn) = last;
      PREV_INSN (last) = insn;

      /* Everything that precedes INSN now either becomes "ready", if
         it can execute immediately before INSN, or "pending", if
         there must be a delay.  Give INSN high enough priority that
         at least one (maybe more) reg-killing insns can be launched
         ahead of all others.  Mark INSN as scheduled by changing its
         priority.  */
      INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
      new_ready = schedule_insn (insn, ready, n_ready, clock);
      INSN_PRIORITY (insn) = DONE_PRIORITY;
      /* Schedule all prior insns that must not be moved.  */
      if (SCHED_GROUP_P (insn))

          /* Disable these insns from being launched.  */

          while (SCHED_GROUP_P (link))

              /* Disable these insns from being launched by anybody.  */
              link = PREV_INSN (link);
              INSN_REF_COUNT (link) = 0;

          /* None of these insns can move forward into delay slots.  */
          while (SCHED_GROUP_P (insn))

              insn = PREV_INSN (insn);
              new_ready = schedule_insn (insn, ready, new_ready, clock);
              INSN_PRIORITY (insn) = DONE_PRIORITY;

              NEXT_INSN (insn) = last;
              PREV_INSN (last) = insn;
  if (reload_completed == 0)
    finish_sometimes_live (regs_sometimes_live, sometimes_max);

  /* HEAD is now the first insn in the chain of insns that
     have been scheduled by the loop above.
     TAIL is the last of those insns.  */

  /* NOTE_LIST is the end of a chain of notes previously found
     among the insns.  Insert them at the beginning of the insns.  */

      rtx note_head = note_list;
      while (PREV_INSN (note_head))
        note_head = PREV_INSN (note_head);

      PREV_INSN (head) = note_list;
      NEXT_INSN (note_list) = head;
  /* There should be no REG_DEAD notes leftover at the end.
     In practice, this can occur as the result of bugs in flow, combine.c,
     and/or sched.c.  The values of the REG_DEAD notes remaining are
     meaningless, because dead_notes is just used as a free list.  */

  if (dead_notes != 0)

  if (new_needs & NEED_HEAD)
    basic_block_head[b] = head;
  PREV_INSN (head) = prev_head;
  NEXT_INSN (prev_head) = head;

  if (new_needs & NEED_TAIL)
    basic_block_end[b] = tail;
  NEXT_INSN (tail) = next_tail;
  PREV_INSN (next_tail) = tail;
  /* Restore the line-number notes of each insn.  */
  if (write_symbols != NO_DEBUG)

      rtx line, note, prev, new;

      head = basic_block_head[b];
      next_tail = NEXT_INSN (basic_block_end[b]);

      /* Determine the current line-number.  We want to know the current
         line number of the first insn of the block here, in case it is
         different from the true line number that was saved earlier.  If
         different, then we need a line number note before the first insn
         of this block.  If it happens to be the same, then we don't want to
         emit another line number note here.  */
      for (line = head; line; line = PREV_INSN (line))
        if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
      /* Walk the insns keeping track of the current line-number and inserting
         the line-number notes as needed.  */
      for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
        if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)

        /* This used to emit line number notes before every non-deleted note.
           However, this confuses a debugger, because line notes not separated
           by real instructions all end up at the same address.  I can find no
           use for line number notes before other notes, so none are emitted.  */
        else if (GET_CODE (insn) != NOTE
                 && (note = LINE_NOTE (insn)) != 0
                     || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
                     || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
            prev = PREV_INSN (insn);
            if (LINE_NOTE (note))

                /* Re-use the original line-number note.  */
                LINE_NOTE (note) = 0;
                PREV_INSN (note) = prev;
                NEXT_INSN (prev) = note;
                PREV_INSN (insn) = note;
                NEXT_INSN (note) = insn;

                new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
                NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);

        fprintf (file, ";; added %d line-number notes\n", notes);
    fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
             clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));

  /* Yow!  We're done!  */
  free_pending_lists ();
/* Subroutine of split_hard_reg_notes.  Searches X for any reference to
   REGNO, returning the rtx of the reference found if any.  Otherwise,
   returns 0.  */

regno_use_in (regno, x)

  if (GET_CODE (x) == REG && REGNO (x) == regno)

  fmt = GET_RTX_FORMAT (GET_CODE (x));
  for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)

      if (tem = regno_use_in (regno, XEXP (x, i)))

      else if (fmt[i] == 'E')
        for (j = XVECLEN (x, i) - 1; j >= 0; j--)
          if (tem = regno_use_in (regno, XVECEXP (x, i, j)))
/* Subroutine of update_flow_info.  Determines whether any new REG_NOTEs are
   needed for the hard register mentioned in the note.  This can happen
   if the reference to the hard register in the original insn was split into
   several smaller hard register references in the split insns.  */

split_hard_reg_notes (note, first, last, orig_insn)
     rtx note, first, last, orig_insn;

  rtx reg, temp, link;
  int n_regs, i, new_reg;

  /* Assume that this is a REG_DEAD note.  */
  if (REG_NOTE_KIND (note) != REG_DEAD)

  reg = XEXP (note, 0);

  n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));

  for (i = 0; i < n_regs; i++)

      new_reg = REGNO (reg) + i;

      /* Check for references to new_reg in the split insns.  */
      for (insn = last; ; insn = PREV_INSN (insn))

          if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
              && (temp = regno_use_in (new_reg, PATTERN (insn))))

              /* Create a new reg dead note here.  */
              link = rtx_alloc (EXPR_LIST);
              PUT_REG_NOTE_KIND (link, REG_DEAD);
              XEXP (link, 0) = temp;
              XEXP (link, 1) = REG_NOTES (insn);
              REG_NOTES (insn) = link;

              /* If killed multiple registers here, then add in the excess.  */
              i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;

          /* It isn't mentioned anywhere, so no new reg note is needed for
             this register.  */
/* Subroutine of update_flow_info.  Determines whether a SET or CLOBBER in an
   insn created by splitting needs a REG_DEAD or REG_UNUSED note added.  */

new_insn_dead_notes (pat, insn, last, orig_insn)
     rtx pat, insn, last, orig_insn;

  /* PAT is either a CLOBBER or a SET here.  */
  dest = XEXP (pat, 0);

  while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
         || GET_CODE (dest) == STRICT_LOW_PART
         || GET_CODE (dest) == SIGN_EXTRACT)
    dest = XEXP (dest, 0);

  if (GET_CODE (dest) == REG)

      for (tem = last; tem != insn; tem = PREV_INSN (tem))

          if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
              && reg_overlap_mentioned_p (dest, PATTERN (tem))
              && (set = single_set (tem)))

              rtx tem_dest = SET_DEST (set);

              while (GET_CODE (tem_dest) == ZERO_EXTRACT
                     || GET_CODE (tem_dest) == SUBREG
                     || GET_CODE (tem_dest) == STRICT_LOW_PART
                     || GET_CODE (tem_dest) == SIGN_EXTRACT)
                tem_dest = XEXP (tem_dest, 0);

              if (tem_dest != dest)

                  /* Use the same scheme as combine.c, don't put both REG_DEAD
                     and REG_UNUSED notes on the same insn.  */
                  if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
                      && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))

                      rtx note = rtx_alloc (EXPR_LIST);
                      PUT_REG_NOTE_KIND (note, REG_DEAD);
                      XEXP (note, 0) = dest;
                      XEXP (note, 1) = REG_NOTES (tem);
                      REG_NOTES (tem) = note;

                  /* The reg only dies in one insn, the last one that uses
                     it.  */

              else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
                /* We found an instruction that both uses the register,
                   and sets it, so no new REG_NOTE is needed for this set.  */
      /* If this is a set, it must die somewhere, unless it is the dest of
         the original insn, and hence is live after the original insn.  Abort
         if it isn't supposed to be live after the original insn.

         If this is a clobber, then just add a REG_UNUSED note.  */

          int live_after_orig_insn = 0;
          rtx pattern = PATTERN (orig_insn);

          if (GET_CODE (pat) == CLOBBER)

              rtx note = rtx_alloc (EXPR_LIST);
              PUT_REG_NOTE_KIND (note, REG_UNUSED);
              XEXP (note, 0) = dest;
              XEXP (note, 1) = REG_NOTES (insn);
              REG_NOTES (insn) = note;

          /* The original insn could have multiple sets, so search the
             insn for all sets.  */
          if (GET_CODE (pattern) == SET)

              if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
                live_after_orig_insn = 1;

          else if (GET_CODE (pattern) == PARALLEL)

              for (i = 0; i < XVECLEN (pattern, 0); i++)
                if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
                    && reg_overlap_mentioned_p (dest,
                                                SET_DEST (XVECEXP (pattern,
                                                                   0, i))))
                  live_after_orig_insn = 1;

          if (! live_after_orig_insn)
/* Subroutine of update_flow_info.  Update the value of reg_n_sets for all
   registers modified by X.  INC is -1 if the containing insn is being deleted,
   and is 1 if the containing insn is a newly generated insn.  */

update_n_sets (x, inc)

      rtx dest = SET_DEST (x);

      while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
             || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
        dest = SUBREG_REG (dest);

      if (GET_CODE (dest) == REG)

          int regno = REGNO (dest);

          if (regno < FIRST_PSEUDO_REGISTER)

              int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));

              for (i = regno; i < endregno; i++)
                reg_n_sets[i] += inc;

            reg_n_sets[regno] += inc;
/* Updates all flow-analysis related quantities (including REG_NOTES) for
   the insns from FIRST to LAST inclusive that were created by splitting
   ORIG_INSN.  NOTES are the original REG_NOTES.  */

update_flow_info (notes, first, last, orig_insn)

  rtx orig_dest, temp;

  /* Get and save the destination set by the original insn.  */

  orig_dest = single_set (orig_insn);

  orig_dest = SET_DEST (orig_dest);
  /* Move REG_NOTES from the original insn to where they now belong.  */

  for (note = notes; note; note = next)

      next = XEXP (note, 1);
      switch (REG_NOTE_KIND (note))

          /* Move these notes from the original insn to the last new insn where
             the register is now set.  */

          for (insn = last; ; insn = PREV_INSN (insn))

              if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                  && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))

                  /* If this note refers to a multiple word hard register, it
                     may have been split into several smaller hard register
                     references, so handle it specially.  */
                  temp = XEXP (note, 0);
                  if (REG_NOTE_KIND (note) == REG_DEAD
                      && GET_CODE (temp) == REG
                      && REGNO (temp) < FIRST_PSEUDO_REGISTER
                      && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
                    split_hard_reg_notes (note, first, last, orig_insn);

                      XEXP (note, 1) = REG_NOTES (insn);
                      REG_NOTES (insn) = note;

                      /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
                         notes.  */
                      /* ??? This won't handle multiple word registers correctly,
                         but should be good enough for now.  */
                      if (REG_NOTE_KIND (note) == REG_UNUSED
                          && ! dead_or_set_p (insn, XEXP (note, 0)))
                        PUT_REG_NOTE_KIND (note, REG_DEAD);

                  /* The reg only dies in one insn, the last one that uses
                     it.  */

          /* It must die somewhere, fail if we couldn't find where it died.

             If this is a REG_UNUSED note, then it must be a temporary
             register that was not needed by this instantiation of the
             pattern, so we can safely ignore it.  */

          if (REG_NOTE_KIND (note) != REG_UNUSED)
          /* This note applies to the dest of the original insn.  Find the
             first new insn that now has the same dest, and move the note
             there.  */

          for (insn = first; ; insn = NEXT_INSN (insn))

              if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                  && (temp = single_set (insn))
                  && rtx_equal_p (SET_DEST (temp), orig_dest))

                  XEXP (note, 1) = REG_NOTES (insn);
                  REG_NOTES (insn) = note;
                  /* The reg is only zero before one insn, the first that
                     sets it.  */

          /* It must be set somewhere, fail if we couldn't find where it
             was set.  */
          /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
             set is meaningless.  Just drop the note.  */

        case REG_NO_CONFLICT:
          /* These notes apply to the dest of the original insn.  Find the last
             new insn that now has the same dest, and move the note there.  */

          for (insn = last; ; insn = PREV_INSN (insn))

              if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                  && (temp = single_set (insn))
                  && rtx_equal_p (SET_DEST (temp), orig_dest))

                  XEXP (note, 1) = REG_NOTES (insn);
                  REG_NOTES (insn) = note;
                  /* Only put this note on one of the new insns.  */

          /* The original dest must still be set someplace.  Abort if we
             couldn't find it.  */
          /* Move a REG_LIBCALL note to the first insn created, and update
             the corresponding REG_RETVAL note.  */
          XEXP (note, 1) = REG_NOTES (first);
          REG_NOTES (first) = note;

          insn = XEXP (note, 0);
          note = find_reg_note (insn, REG_RETVAL, NULL_RTX);

            XEXP (note, 0) = first;

          /* Move a REG_RETVAL note to the last insn created, and update
             the corresponding REG_LIBCALL note.  */
          XEXP (note, 1) = REG_NOTES (last);
          REG_NOTES (last) = note;

          insn = XEXP (note, 0);
          note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);

            XEXP (note, 0) = last;
          /* This should be moved to whichever instruction is a JUMP_INSN.  */

          for (insn = last; ; insn = PREV_INSN (insn))

              if (GET_CODE (insn) == JUMP_INSN)

                  XEXP (note, 1) = REG_NOTES (insn);
                  REG_NOTES (insn) = note;
                  /* Only put this note on one of the new insns.  */

              /* Fail if we couldn't find a JUMP_INSN.  */

          /* This should be moved to whichever instruction now has the
             increment operation.  */

          /* Should be moved to the new insn(s) which use the label.  */
          for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
            if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
              REG_NOTES (insn) = gen_rtx (EXPR_LIST, REG_LABEL,
                                          XEXP (note, 0), REG_NOTES (insn));

          /* These two notes will never appear until after reorg, so we don't
             have to handle them here.  */
  /* Each new insn created, except the last, has a new set.  If the destination
     is a register, then this reg is now live across several insns, whereas
     previously the dest reg was born and died within the same insn.  To
     reflect this, we now need a REG_DEAD note on the insn where this
     register dies.

     Similarly, the new insns may have clobbers that need REG_UNUSED notes.  */

  for (insn = first; insn != last; insn = NEXT_INSN (insn))

      pat = PATTERN (insn);
      if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
        new_insn_dead_notes (pat, insn, last, orig_insn);
      else if (GET_CODE (pat) == PARALLEL)

          for (i = 0; i < XVECLEN (pat, 0); i++)
            if (GET_CODE (XVECEXP (pat, 0, i)) == SET
                || GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
              new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
  /* If any insn, except the last, uses the register set by the last insn,
     then we need a new REG_DEAD note on that insn.  In this case, there
     would not have been a REG_DEAD note for this register in the original
     insn because it was used and set within one insn.

     There is no new REG_DEAD note needed if the last insn uses the register
     that it is setting.  */

  set = single_set (last);

      rtx dest = SET_DEST (set);

      while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
             || GET_CODE (dest) == STRICT_LOW_PART
             || GET_CODE (dest) == SIGN_EXTRACT)
        dest = XEXP (dest, 0);

      if (GET_CODE (dest) == REG
          && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))

          for (insn = PREV_INSN (last); ; insn = PREV_INSN (insn))

              if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                  && reg_mentioned_p (dest, PATTERN (insn))
                  && (set = single_set (insn)))

                  rtx insn_dest = SET_DEST (set);

                  while (GET_CODE (insn_dest) == ZERO_EXTRACT
                         || GET_CODE (insn_dest) == SUBREG
                         || GET_CODE (insn_dest) == STRICT_LOW_PART
                         || GET_CODE (insn_dest) == SIGN_EXTRACT)
                    insn_dest = XEXP (insn_dest, 0);

                  if (insn_dest != dest)

                      note = rtx_alloc (EXPR_LIST);
                      PUT_REG_NOTE_KIND (note, REG_DEAD);
                      XEXP (note, 0) = dest;
                      XEXP (note, 1) = REG_NOTES (insn);
                      REG_NOTES (insn) = note;
                      /* The reg only dies in one insn, the last one
                         that uses it.  */
  /* If the original dest is modifying a multiple register target, and the
     original instruction was split such that the original dest is now set
     by two or more SUBREG sets, then the split insns no longer kill the
     destination of the original insn.

     In this case, if there exists an instruction in the same basic block,
     before the split insn, which uses the original dest, and this use is
     killed by the original insn, then we must remove the REG_DEAD note on
     this insn, because it is now superfluous.

     This does not apply when a hard register gets split, because the code
     knows how to handle overlapping hard registers properly.  */
  if (orig_dest && GET_CODE (orig_dest) == REG)

      int found_orig_dest = 0;
      int found_split_dest = 0;

      for (insn = first; ; insn = NEXT_INSN (insn))

          set = single_set (insn);

              if (GET_CODE (SET_DEST (set)) == REG
                  && REGNO (SET_DEST (set)) == REGNO (orig_dest))

                  found_orig_dest = 1;

              else if (GET_CODE (SET_DEST (set)) == SUBREG
                       && SUBREG_REG (SET_DEST (set)) == orig_dest)

                  found_split_dest = 1;

      if (found_split_dest)

          /* Search backwards from FIRST, looking for the first insn that uses
             the original dest.  Stop if we pass a CODE_LABEL or a JUMP_INSN.
             If we find an insn, and it has a REG_DEAD note, then delete the
             note.  */

          for (insn = first; insn; insn = PREV_INSN (insn))

              if (GET_CODE (insn) == CODE_LABEL
                  || GET_CODE (insn) == JUMP_INSN)

              else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
                       && reg_mentioned_p (orig_dest, insn))

                  note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));

                    remove_note (insn, note);

      else if (! found_orig_dest)

          /* This should never happen.  */
  /* Update reg_n_sets.  This is necessary to prevent local alloc from
     converting REG_EQUAL notes to REG_EQUIV when splitting has modified
     a reg from set once to set multiple times.  */

      rtx x = PATTERN (orig_insn);
      RTX_CODE code = GET_CODE (x);

      if (code == SET || code == CLOBBER)
        update_n_sets (x, -1);
      else if (code == PARALLEL)

          for (i = XVECLEN (x, 0) - 1; i >= 0; i--)

              code = GET_CODE (XVECEXP (x, 0, i));
              if (code == SET || code == CLOBBER)
                update_n_sets (XVECEXP (x, 0, i), -1);

      for (insn = first; ; insn = NEXT_INSN (insn))

          x = PATTERN (insn);
          code = GET_CODE (x);

          if (code == SET || code == CLOBBER)
            update_n_sets (x, 1);
          else if (code == PARALLEL)

              for (i = XVECLEN (x, 0) - 1; i >= 0; i--)

                  code = GET_CODE (XVECEXP (x, 0, i));
                  if (code == SET || code == CLOBBER)
                    update_n_sets (XVECEXP (x, 0, i), 1);
/* The one entry point in this file.  DUMP_FILE is the dump file for
   this pass.  */

schedule_insns (dump_file)

  int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);

  /* Taking care of this degenerate case makes the rest of
     this code simpler.  */
  if (n_basic_blocks == 0)

  /* Create an insn here so that we can hang dependencies off of it later.  */
  sched_before_next_call
    = gen_rtx (INSN, VOIDmode, 0, NULL_RTX, NULL_RTX,
               NULL_RTX, 0, NULL_RTX, 0);
  /* Initialize the unused_*_lists.  We can't use the ones left over from
     the previous function, because gcc has freed that memory.  We can use
     the ones left over from the first sched pass in the second pass however,
     so only clear them on the first sched pass.  The first pass is before
     reload if flag_schedule_insns is set, otherwise it is afterwards.  */

  if (reload_completed == 0 || ! flag_schedule_insns)

      unused_insn_list = 0;
      unused_expr_list = 0;
  /* We create no insns here, only reorder them, so we
     remember how far we can cut back the stack on exit.  */

  /* Allocate data for this pass.  See comments, above,
     for what these vectors do.  */
  insn_luid = (int *) alloca (max_uid * sizeof (int));
  insn_priority = (int *) alloca (max_uid * sizeof (int));
  insn_tick = (int *) alloca (max_uid * sizeof (int));
  insn_costs = (short *) alloca (max_uid * sizeof (short));
  insn_units = (short *) alloca (max_uid * sizeof (short));
  insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
  insn_ref_count = (int *) alloca (max_uid * sizeof (int));
  if (reload_completed == 0)

      sched_reg_n_deaths = (short *) alloca (max_regno * sizeof (short));
      sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
      sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
      bb_dead_regs = (regset) alloca (regset_bytes);
      bb_live_regs = (regset) alloca (regset_bytes);
      bzero (sched_reg_n_calls_crossed, max_regno * sizeof (int));
      bzero (sched_reg_live_length, max_regno * sizeof (int));
      bcopy (reg_n_deaths, sched_reg_n_deaths, max_regno * sizeof (short));
      init_alias_analysis ();

      sched_reg_n_deaths = 0;
      sched_reg_n_calls_crossed = 0;
      sched_reg_live_length = 0;

      if (! flag_schedule_insns)
        init_alias_analysis ();
  if (write_symbols != NO_DEBUG)

      line_note = (rtx *) alloca (max_uid * sizeof (rtx));
      bzero (line_note, max_uid * sizeof (rtx));
      line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
      bzero (line_note_head, n_basic_blocks * sizeof (rtx));

      /* Determine the line-number at the start of each basic block.
         This must be computed and saved now, because after a basic block's
         predecessor has been scheduled, it is impossible to accurately
         determine the correct line number for the first insn of the block.  */

      for (b = 0; b < n_basic_blocks; b++)
        for (line = basic_block_head[b]; line; line = PREV_INSN (line))
          if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)

              line_note_head[b] = line;
  bzero (insn_luid, max_uid * sizeof (int));
  bzero (insn_priority, max_uid * sizeof (int));
  bzero (insn_tick, max_uid * sizeof (int));
  bzero (insn_costs, max_uid * sizeof (short));
  bzero (insn_units, max_uid * sizeof (short));
  bzero (insn_blockage, max_uid * sizeof (unsigned int));
  bzero (insn_ref_count, max_uid * sizeof (int));
  /* Schedule each basic block, block by block.  */

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */

  insn = basic_block_end[n_basic_blocks-1];
  if (NEXT_INSN (insn) == 0
      || (GET_CODE (insn) != NOTE
          && GET_CODE (insn) != CODE_LABEL
          /* Don't emit a NOTE if it would end up between an unconditional
             jump and a BARRIER.  */
          && ! (GET_CODE (insn) == JUMP_INSN
                && GET_CODE (NEXT_INSN (insn)) == BARRIER)))
    emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);
  for (b = 0; b < n_basic_blocks; b++)

      for (insn = basic_block_head[b]; ; insn = next)

          /* Can't use `next_real_insn' because that
             might go across CODE_LABELS and short-out basic blocks.  */
          next = NEXT_INSN (insn);
          if (GET_CODE (insn) != INSN)

              if (insn == basic_block_end[b])

          /* Don't split no-op move insns.  These should silently disappear
             later in final.  Splitting such insns would break the code
             that handles REG_NO_CONFLICT blocks.  */
          set = single_set (insn);
          if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))

              if (insn == basic_block_end[b])

              /* Nops get in the way while scheduling, so delete them now if
                 register allocation has already been done.  It is too risky
                 to try to do this before register allocation, and there are
                 unlikely to be very many nops then anyways.  */
              if (reload_completed)

                  PUT_CODE (insn, NOTE);
                  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
                  NOTE_SOURCE_FILE (insn) = 0;
          /* Split insns here to get max fine-grain parallelism.  */
          prev = PREV_INSN (insn);
          if (reload_completed == 0)

              rtx last, first = PREV_INSN (insn);
              rtx notes = REG_NOTES (insn);

              last = try_split (PATTERN (insn), insn, 1);

                  /* try_split returns the NOTE that INSN became.  */
                  first = NEXT_INSN (first);
                  update_flow_info (notes, first, last, insn);

                  PUT_CODE (insn, NOTE);
                  NOTE_SOURCE_FILE (insn) = 0;
                  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
                  if (insn == basic_block_head[b])
                    basic_block_head[b] = first;
                  if (insn == basic_block_end[b])

                      basic_block_end[b] = last;

          if (insn == basic_block_end[b])

      schedule_block (b, dump_file);
  /* Reposition the prologue and epilogue notes in case we moved the
     prologue/epilogue insns.  */
  if (reload_completed)
    reposition_prologue_and_epilogue_notes (get_insns ());

  if (write_symbols != NO_DEBUG)

      rtx insn = get_insns ();
      int active_insn = 0;
      /* Walk the insns deleting redundant line-number notes.  Many of these
         are already present.  The remainder tend to occur at basic
         block boundaries.  */
      for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
        if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)

            /* If there are no active insns following, INSN is redundant.  */
            if (active_insn == 0)

                NOTE_SOURCE_FILE (insn) = 0;
                NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;

            /* If the line number is unchanged, LINE is redundant.  */
            else if (line
                     && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
                     && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))

                NOTE_SOURCE_FILE (line) = 0;
                NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;

        else if (! ((GET_CODE (insn) == NOTE
                     && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
                    || (GET_CODE (insn) == INSN
                        && (GET_CODE (PATTERN (insn)) == USE
                            || GET_CODE (PATTERN (insn)) == CLOBBER))))

      if (dump_file && notes)
        fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
  if (reload_completed == 0)
    {
      int regno;

      for (regno = 0; regno < max_regno; regno++)
	if (sched_reg_live_length[regno])
	  {
	    if (dump_file)
	      {
		if (reg_live_length[regno] > sched_reg_live_length[regno])
		  fprintf (dump_file,
			   ";; register %d life shortened from %d to %d\n",
			   regno, reg_live_length[regno],
			   sched_reg_live_length[regno]);
		/* Negative values are special; don't overwrite the current
		   reg_live_length value if it is negative.  */
		else if (reg_live_length[regno] < sched_reg_live_length[regno]
			 && reg_live_length[regno] >= 0)
		  fprintf (dump_file,
			   ";; register %d life extended from %d to %d\n",
			   regno, reg_live_length[regno],
			   sched_reg_live_length[regno]);

		if (! reg_n_calls_crossed[regno]
		    && sched_reg_n_calls_crossed[regno])
		  fprintf (dump_file,
			   ";; register %d now crosses calls\n", regno);
		else if (reg_n_calls_crossed[regno]
			 && ! sched_reg_n_calls_crossed[regno]
			 && reg_basic_block[regno] != REG_BLOCK_GLOBAL)
		  fprintf (dump_file,
			   ";; register %d no longer crosses calls\n", regno);
	      }

	    /* Negative values are special; don't overwrite the current
	       reg_live_length value if it is negative.  */
	    if (reg_live_length[regno] >= 0)
	      reg_live_length[regno] = sched_reg_live_length[regno];

	    /* We can't change the value of reg_n_calls_crossed to zero for
	       pseudos which are live in more than one block.

	       This is because combine might have made an optimization which
	       invalidated basic_block_live_at_start and reg_n_calls_crossed,
	       but it does not update them.  If we update reg_n_calls_crossed
	       here, the two variables are now inconsistent, and this might
	       confuse the caller-save code into saving a register that doesn't
	       need to be saved.  This is only a problem when we zero calls
	       crossed for a pseudo live in multiple basic blocks.

	       Alternatively, we could try to correctly update basic block live
	       at start here in sched, but that seems complicated.  */
	    if (sched_reg_n_calls_crossed[regno]
		|| reg_basic_block[regno] != REG_BLOCK_GLOBAL)
	      reg_n_calls_crossed[regno] = sched_reg_n_calls_crossed[regno];
	  }
    }
}
#endif /* INSN_SCHEDULING */