This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Review of the 3rd and last piece of the machine-independent part of the selective scheduler


This is the 3rd and last piece of the review of the machine-independent
part of the selective scheduler.

The code and algorithms of the selective scheduler in sel-sched.c are
very well documented, but there are a lot of functions whose parameters
are not documented.  Sometimes the meaning of the parameters can easily
be understood from the function descriptions or their code, but it is
better to document them explicitly.

Here is what I found worth fixing:


>diff -cprNd -x .svn -x .hg trunk/gcc/sel-sched.c sel-sched-branch/gcc/sel-sched.c
>*** trunk/gcc/sel-sched.c Thu Jan 1 03:00:00 1970
>--- sel-sched-branch/gcc/sel-sched.c Thu May 29 18:28:30 2008
>***************
>*** 0 ****
>--- 1,7364 ----
...


>+
>+ /* This descibes the data given to sel_sched_region_2. */
>+ struct sel_sched_region_2_data_def
>+ {
>+ int orig_max_seqno;
>+ int highest_seqno_in_use;


Comments, please, for the structure members.

>+ };

...

>+ /* This vector has the exprs which may still present in av_sets, but actually
>+ can't be moved up due to bookkeping created during code motion to another
>+ fence. See comment near the call to update_and_record_unavailable_insns
>+ for the detailed explanations. */
>+ DEF_VEC_P(vinsn_t);
>+ DEF_VEC_ALLOC_P(vinsn_t,heap);
>+ typedef VEC(vinsn_t, heap) *vinsn_vec_t;
>+ static vinsn_vec_t vec_bookkeeping_blocked_vinsns = NULL;
>+ static vinsn_vec_t vec_target_unavailable_vinsns = NULL;


Comments for vec_target_unavailable_vinsns, please.

>+
>+
>+ /* Variables to accumulate different statistics.  */
>+ static int stat_bookkeeping_copies;
>+ static int stat_insns_needed_bookkeeping;
>+ static int stat_renamed_scheduled;
>+ static int stat_substitutions_total;

Please provide details for these variables.



...

>+ /* Construct successor fences from FENCE and put them in NEW_FENCES.
                                     ^
I think it is a typo.  It should be OLD_FENCES.
Description of parameter `data' is absent.

>+    When a successor will continue a ebb, transfer all FENCE's parameters
>+    to the new fence.  */
>+ static void
>+ extract_new_fences_from (flist_t old_fences, flist_tail_t new_fences,
>+              sel_sched_region_2_data_t data)
>+ {
...

>+ /* Substitute all occurences of INSN's destination in EXPR' vinsn with INSN's
>+ source (if INSN is eligible for substitution). Returns TRUE if
>+ substitution was actually performed, FALSE otherwise. Substitution might
>+ be not performed because it's either EXPR' vinsn doesn't contain INSN's
>+ destination or the resulting insn is invalid for the target machine. */


Description of parameter `undo' is absent.

>+ static bool
>+ substitute_reg_in_expr (expr_t expr, insn_t insn, bool undo)
>+ {
...

>+
>+ /* Returns whether VI writes one of the REGS.  */

                                          ^
I think it should be USED_REGS.
Description of parameter `unavailable_hard_regs' is absent.

>+ static bool
>+ vinsn_writes_one_of_regs_p (vinsn_t vi, regset used_regs,
>+                             HARD_REG_SET unavailable_hard_regs)
>+ {
...

>+
>+ #if 0
>+ /* True when expressions of MODE are considered for renaming.  */
>+ static inline bool
>+ mode_ok_for_rename_p (enum machine_mode mode)
>+ {
>+   enum mode_class class = GET_MODE_CLASS (mode);
>+
>+   return class == MODE_INT || class == MODE_FLOAT;
>+ }
>+ #endif

Please, remove the above code.

...

>+
>+ /* reg_rename_tick[REG1] > reg_rename_tick[REG2] if REG1 was chosen as the
>+ best register more recently than REG2. */
>+ static int reg_rename_tick[FIRST_PSEUDO_REGISTER];
>+ static int reg_rename_this_tick;


Please, document `reg_rename_this_tick'.

>+
>+ /* Choose the register among free, that is suitable for storing
>+ the rhs value.
>+
>+ ORIGINAL_INSNS is the list of insns where the operation (rhs)
>+ originally appears. There could be multiple original operations
>+ for single rhs since we moving it up and merging along different
>+ paths.
>+
>+ Some code is adapted from regrename.c (regrename_optimize).
>+ If original register is available, function returns it.
>+ Otherwise it performs the checks, so the new register should
>+ comply with the following:
>+ - it should not be in the UNAVAILABLE set;
>+ - it should be in the class compatible with original uses;
>+ - it should not be clobbered through reference with different mode;
>+ - if we're in the leaf function, then the new register should
>+ not be in the LEAF_REGISTERS;
>+ - etc.
>+
>+ Most of this conditions are checked in find_used_regs_1, and
>+ unavailable registers due to this restrictions are already included
>+ in UNAVAILABLE set.
>+
>+ If several registers meet the conditions, the register with smallest
>+ tick is returned to achieve more even register allocation.
>+
>+ If no register satisfies the above conditions, NULL_RTX is returned. */
>+ static rtx
>+ choose_best_reg_1 (HARD_REG_SET hard_regs_used,
>+ struct reg_rename *reg_rename_p,
>+ def_list_t original_insns, bool *is_orig_reg_p_ptr)


Descriptions of `hard_regs_used', `reg_rename_p', and
`is_orig_reg_p_ptr' are absent.

...


>+ /* Collect unavailable registers for EXPR from BNDS into USED_REGS. */
>+ static void
>+ collect_unavailable_regs_from_bnds (expr_t expr, blist_t bnds, regset used_regs,
>+ struct reg_rename *reg_rename_p,
>+ def_list_t *original_insns)


Please, document `reg_rename_p' and `original_insns'.
...

>+
>+ /* Select and assign best register to EXPR. Set *IS_ORIG_REG_P to TRUE if
>+ original register was selected. Return FALSE if no register can be
>+ chosen, which could happen when:
>+ * EXPR_SEPARABLE_P is true but we were unable to find suitable register;
>+ * EXPR_SEPARABLE_P is false but the insn sets/clobbers one of the registers
>+ that are used on the moving path. */
>+ static bool
>+ find_best_reg_for_expr (expr_t expr, blist_t bnds, bool *is_orig_reg_p)
>+ {


Please add a description of `bnds'.
...

>+
>+ #ifdef ENABLE_CHECKING
>+   /* If after reload, make sure we're working with hard regs here.  */
>+   if (reload_completed) {
>+     reg_set_iterator rsi;
>+     unsigned i;
>+     EXECUTE_IF_SET_IN_REG_SET (used_regs, FIRST_PSEUDO_REGISTER, i, rsi)
>+       gcc_unreachable ();
>+   }

Please, fix formatting ({ on a new line).
...

>+ #define CANT_MOVE_TRAPPING(expr, through_insn)     \
>+   (VINSN_MAY_TRAP_P (EXPR_VINSN (expr))            \
>+    && !sel_insn_has_single_succ_p ((through_insn), SUCCS_ALL) \
>+    && !sel_insn_is_speculation_check (through_insn))
>+

Please, document the macro.
...

>+ static enum MOVEUP_EXPR_CODE
>+ moveup_expr (expr_t expr, insn_t through_insn, bool inside_insn_group,
>+             enum local_trans_type *ptrans_type)
>+ {
>+   vinsn_t vi = EXPR_VINSN (expr);
>+   insn_t insn = VINSN_INSN_RTX (vi);
>+   bool was_changed = false;
>+   bool as_rhs = false;
>+   ds_t *has_dep_p;
>+   ds_t full_ds;
>+
>+   /* When inside_insn_group, delegate to the helper.  */
>+   if (inside_insn_group)
>+     return moveup_expr_inside_insn_group (expr, through_insn);
>+
>+   /* Deal with unique insns and control dependencies.  */
>+   if (VINSN_UNIQUE_P (vi))
>+     {
>+       /* We can move jumps without side-effects or jumps that are
>+      mutually exculsive with instruction THROUGH_INSN (all in cases
                 ^ exclusive.
...

>+ /* Try to look at bitmap caches for EXPR and INSN pair, return true
>+    if successful.  */
>+ static bool
>+ try_bitmap_cache (expr_t expr, insn_t insn,
>+                   bool inside_insn_group,
>+                   enum MOVEUP_EXPR_CODE *res)

Please document `inside_insn_group' and `res'.
...

>+
>+ /* Try to look at bitmap caches for EXPR and INSN pair, return true
>+    if successful.  */
>+ static bool
>+ try_transformation_cache (expr_t expr, insn_t insn,
>+                           enum MOVEUP_EXPR_CODE *res)

Please document `res'.
...

>+       if (sinfo->all_succs_n > 1
>+           && sinfo->all_succs_n == sinfo->succs_ok_n)
>+         {
>+           /* Find EXPR'es that came from *all* successors and save them
>+              into expr_in_all_succ_branches. This set will be used later
>+              for calculating speculation attributes of EXPR'es. */
>+           if (is == 0)
>+             {
>+               expr_in_all_succ_branches = av_set_copy (succ_set);
>+
>+               /* Remember the first successor for later. */
                                                         ^ one more space
>+               zero_succ = succ;
>+             }
>+           else
>+             {
>+               av_set_iterator i;
>+               expr_t expr;
>+
>+               FOR_EACH_EXPR_1 (expr, i, &expr_in_all_succ_branches)
>+                 if (!av_set_is_in_p (succ_set, EXPR_VINSN (expr)))
>+                   av_set_iter_remove (&i);
>+             }
>+         }
...


>+
>+ /* Computes the av_set below the last bb insn, doing all the 'dirty work' of
>+ handling multiple successors and properly merging its av_sets. */
>+ static av_set_t
>+ compute_av_set_at_bb_end (insn_t insn, ilist_t p, int ws)


Please document `p' and `ws'.
...

>+ }
>+
>+ static regset
>+ compute_live_after_bb (basic_block bb)

A comment for the function, please.
...

>+ /* Functions to check liveness restrictions on available instructions. */
>+
^
I see only one function. You could remove the comment.
...


>+ /* Find the set of registers that are unavailable for storing expres
>+ while moving ORIG_OPS up on the path starting from INSN due to
>+ liveness (USED_REGS) or hardware restrictions (REG_RENAME_P).
>+
>+ All the original operations found during the traversal are saved in the
>+ ORIGINAL_INSNS list.
>+
>+ REG_RENAME_P denotes the set of hardware registers that
>+ can not be used with renaming due to the register class restrictions,
>+ mode restrictions and other (the register we'll choose should be
>+ compatible class with the original uses, shouldn't be in call_used_regs,
>+ should be HARD_REGNO_RENAME_OK etc).
>+
>+ Returns TRUE if we've found all original insns, FALSE otherwise.
>+
>+ This function utilizes code_motion_path_driver (formerly find_used_regs_1)
>+ to traverse the code motion paths. This helper function finds registers
>+ that are not available for storing expres while moving ORIG_OPS up on the
>+ path starting from INSN. A register considered as used on the moving path,
>+ if one of the following conditions is not satisfied:
>+
>+ (1) a register not set or read on any path from xi to an instance of
>+ the original operation,
>+ (2) not among the live registers of the point immediately following the
>+ first original operation on a given downward path, except for the
>+ original target register of the operation,
>+ (3) not live on the other path of any conditional branch that is passed
>+ by the operation, in case original operations are not present on
>+ both paths of the conditional branch.
>+
>+ All the original operations found during the traversal are saved in the
>+ ORIGINAL_INSNS list.
>+
>+ CROSSES_CALL is true, if there is a call insn on the path from INSN to


                 ^ of REG_RENAME_P.
...

>+
>+ /* Filter out expressions that are pipelined too much.  */
                           ^ in av set given by AV_PTR.

>+ static void
>+ process_pipelined_exprs (av_set_t *av_ptr)
...

>+
>+ /* Turn AV into a vector, filter inappropriate insns and sort it. Return
>+ true if there is something to schedule. */
>+ static bool
>+ fill_vec_av_set (av_set_t av, blist_t bnds, fence_t fence,
>+ int *pneed_stall)


Please add descriptions of `bnds', `fence', and `pneed_stall'.
...

>+       if (target_available == true)
>+         {
>+           /* Do nothing -- we can use an existing register. */
>+           is_orig_reg_p = EXPR_SEPARABLE_P (expr);
>+         }
>+       else if (/* Non-separable instruction will never
>+                   get another register. */
                                          ^ one more space.
>+                (target_available == false
>+                 && !EXPR_SEPARABLE_P (expr))
>+                /* Don't try to find a register for low-priority expression. */
>+                || n >= max_insns_to_rename
>+                /* ??? FIXME: Don't try to rename data speculation. */
>+                || (EXPR_SPEC_DONE_DS (expr) & BEGIN_DATA)
>+                || ! find_best_reg_for_expr (expr, bnds, &is_orig_reg_p))


...

>+ /* Initialize ready list from the AV for the max_issue () call.
>+    If any unrecognizable insn found in the AV, return it (and skip
>+    max_issue).  BND and FENCE are current boundary and fence,
>+    respectively.  */
>+ static expr_t
>+ fill_ready_list (av_set_t *av_ptr, blist_t bnds, fence_t fence,
>+                  int *pneed_stall)

Please add description of `pneed_stall'.
...


>+ /* Invoke reorder* target hooks on the ready list. Return the number of insns
>+ we can issue. */
>+ static int
>+ invoke_reorder_hooks (fence_t fence)


Please document `fence'.
...


>+ /* Call the rest of the hooks after the choice was made. Return
>+    the number of insns that still can be issued. */
>+ static int
>+ invoke_aftermath_hooks (fence_t fence, rtx best_insn, int issue_more)

Please add descriptions of `fence', `best_insn', and `issue_more'.

...

>+ static int fill_insns_run = 0;
>+

Please document the function.
...

>+ static void
>+ remove_insns_for_debug (blist_t bnds, av_set_t *av_vliw_p)
>+ {
...

>+ /* Compute available instructions on boundaries. */
^ BNDS
>+ static void
>+ compute_av_set_on_boundaries (fence_t fence, blist_t bnds, av_set_t *av_vliw_p)


Add descriptions of `fence' and `av_vliw_p', please.
...


>+ /* Calculate the sequential av set corresponding to the EXPR_VLIW
>+ expression. */
>+ static av_set_t
>+ find_sequential_best_exprs (bnd_t bnd, expr_t expr_vliw, bool for_moveop)


Add descriptions of `bnd' and `for_moveop', please.
...


>+ /* Find original instructions for EXPR_SEQ and move it to BND boundary.
>+    Return the expression to emit in C_EXPR. */
>+ static void
>+ move_exprs_to_boundary (bnd_t bnd, expr_t expr_vliw,
>+ av_set_t expr_seq, expr_t c_expr)


Add a description of `expr_vliw', please.
...


>+ /* Update FENCE on which INSN was scheduled and this INSN, too. */
>+ static void
>+ update_fence_and_insn (fence_t fence, insn_t insn, int need_stall)
>+ {

Please, document `need_stall'.
...

>+ /* Update boundary BND with INSN and add new boundaries to BNDS_TAIL_P. */
>+ static blist_t *
>+ update_boundaries (bnd_t bnd, insn_t insn, blist_t *bndsp,
>+ blist_t *bnds_tailp)


Please, document `bndsp' and the function result.
...

>+ /* This function is called after the last successor. Copies LP->C_EXPR_MERGED
>+ into SP->CEXPR. */
>+ static void
>+ move_op_after_merge_succs (cmpd_local_params_p lp, void *sparams)
>+ {
>+ moveop_static_params_p sp = sparams;


Add blank line here, please.

>+   sp->c_expr = lp->c_expr_merged;
>+ }

...

>+ /* Emit a register-register copy for INSN if needed.  Return true if
>+    emitted one.  */
>+ static bool
>+ maybe_emit_renaming_copy (rtx insn,
>+                           moveop_static_params_p params)

Please, add description of `params'.
...

>+ /* Emit a speculative check for INSN if needed.  Return true if we've
>+    emitted one.  */
>+ static bool
>+ maybe_emit_speculative_check (rtx insn, expr_t expr,
>+                               moveop_static_params_p params)

Please, add description of `params'.
...

>+ /* Handle transformations that leave an insn in place of original
>+    insn such as renaming/speculation.  Return true if one of such
>+    transformations actually happened, and we have emitted this insn.  */
>+ static bool
>+ handle_emitting_transformations (rtx insn, expr_t expr,
>+                                  moveop_static_params_p params)

Please, add description of `params'.
...

>+ /* Remove INSN from stream to schedule it later.  */
>+ static void
>+ remove_insn_from_stream (rtx insn, bool only_disconnect)
>+ {

Please, add description of `only_disconnect'.
...


>+ /* This function is called when original expr is found.
>+ INSN - current insn traversed, EXPR - the corresponding expr found. */
>+ static void
>+ move_op_orig_expr_found (insn_t insn, expr_t expr,
>+ cmpd_local_params_p lparams ATTRIBUTE_UNUSED,
>+ void *static_params)


Please, add descriptions of `lparams' and `static_params'.
...

>+ /* Traverse all successors of INSN. For each successor that is SUCCS_NORMAL
>+ code_motion_path_driver is called recursively. Original operation
>+ was found at least on one path that is starting with one of INSN's
>+ successors (this fact is asserted). */


Please, add descriptions of last three parameters and the function
result.

>+ static int
>+ code_motion_process_successors (insn_t insn, av_set_t orig_ops,
>+                                 ilist_t path, void *static_params)
>+ {

...

>+ /* Perform a cleanup when the driver is about to terminate.  */
>+ static inline void
>+ code_motion_path_driver_cleanup (av_set_t *orig_ops_p, ilist_t *path_p)
>+ {

Please, add descriptions of the parameters.
...

>+ /* The driver function that implements move_op or find_used_regs
>+ functionality dependent whether code_motion_path_driver_INFO is set to
>+ &MOVE_OP_HOOKS or &FUR_HOOKS. This function implements the common parts
>+ of code (CFG traversal etc) that are shared among both functions. */
>+ static int
>+ code_motion_path_driver (insn_t insn, av_set_t orig_ops, ilist_t path,
>+ cmpd_local_params_p local_params_in,
>+ void *static_params)


Please, add descriptions of last three parameters and the function
result.
...

>+ /* Move up the operations from ORIG_OPS set traversing the dag starting
>+    from INSN.  PATH represents the edges traversed so far.
>+    REG is the register chosen for scheduling the current expr.  Insert
       ^
I think it should be DEST.

>+    bookkeeping code in the join points.  Returns TRUE.  */
>+ static bool
>+ move_op (insn_t insn, av_set_t orig_ops, expr_t expr_vliw,
>+          rtx dest, expr_t c_expr)

Please, add descriptions of parameters `expr_vliw' and `c_expr'.
...

>+ /* A helper for init_seqno. Traverse the region starting from BB and
>+ compute seqnos for visited insns, marking visited bbs in VISITED_BBS. */
>+ static void
>+ init_seqno_1 (basic_block bb, sbitmap visited_bbs, bitmap blocks_to_reschedule)


Please, add description of `blocks_to_reschedule'.
...

>+ /* Initialize seqnos for the current region. */
>+ static int
>+ init_seqno (int number_of_insns, bitmap blocks_to_reschedule, basic_block from)
>+ {


Please, add descriptions of the parameters and the function result.
...

>+ /* Schedule a parallel instruction group on each of FENCES.  */
>+ static void
>+ schedule_on_fences (flist_t fences, int max_seqno,
>+                     ilist_t **scheduled_insns_tailpp)

Please, add descriptions of the last two parameters.
...

>+ /* Update seqnos of SCHEDULED_INSNS.  */
                     ^ insns given by P

>+ static int
>+ update_seqnos_and_stage (int min_seqno, int max_seqno,
>+                          int highest_seqno_in_use,
>+                          ilist_t *pscheduled_insns)

Please, add descriptions of the first three parameters and the result.
...

>+ /* The main driver for scheduling a region. This function is responsible
>+ for correct propagation of fences (i.e. scheduling points) and creating
>+ a group of parallel insns at each of them. It also supports
>+ pipelining. */
>+ static void
>+ sel_sched_region_2 (sel_sched_region_2_data_t data)


A description of `data', please.
...

>+ {
>+ int orig_max_seqno = data->orig_max_seqno;
>+ int highest_seqno_in_use = orig_max_seqno;
>+
>+ stat_bookkeeping_copies = 0;
>+ stat_insns_needed_bookkeeping = 0;
>+ stat_renamed_scheduled = 0;
>+ stat_substitutions_total = 0;
>+ num_insns_scheduled = 0;
>+
>+ while (fences)
>+ {
>+ int min_seqno, max_seqno;
>+ ilist_t scheduled_insns = NULL;
>+ ilist_t *scheduled_insns_tailp = &scheduled_insns;
>+
>+ find_min_max_seqno (fences, &min_seqno, &max_seqno);
>+ schedule_on_fences (fences, max_seqno, &scheduled_insns_tailp);
>+ fences = calculate_new_fences (fences, data, orig_max_seqno);
>+ highest_seqno_in_use = update_seqnos_and_stage (min_seqno, max_seqno,
>+ highest_seqno_in_use,
>+ &scheduled_insns);
>+ }
>+
>+ gcc_assert (data->orig_max_seqno == orig_max_seqno);
>+ data->highest_seqno_in_use = highest_seqno_in_use;
>+
>+ if (sched_verbose >= 1)
>+ sel_print ("Scheduled %d bookkeeping copies, %d insns needed "
>+ "bookkeeping, %d insns renamed, %d insns substituted\n",
>+ stat_bookkeeping_copies,
>+ stat_insns_needed_bookkeeping,
>+ stat_renamed_scheduled,
>+ stat_substitutions_total);
>+ }
>+
>+ /* Schedule a region. When pipelining, search for possibly never scheduled
>+ bookkeeping code and schedule it. Reschedule pipelined code without
>+ pipelining after. */
>+ static void
>+ sel_sched_region_1 (void)
>+ {
>+ struct sel_sched_region_2_data_def _data, *data = &_data;
>+ int number_of_insns;
>+
>+ /* Remove empty blocks that might be in the region from the beginning.
>+    We need to do save sched_max_luid before that, as it actually shows
>+ the number of insns in the region, and purge_empty_blocks can
>+ alter it. */
>+ number_of_insns = sched_max_luid - 1;
>+ purge_empty_blocks ();
>+
>+ data->orig_max_seqno = init_seqno (number_of_insns, NULL, NULL);
>+ gcc_assert (data->orig_max_seqno >= 1);
>+
>+ /* When pipelining outer loops, create fences on the loop header,
>+ not preheader. */
>+ fences = NULL;
>+ if (current_loop_nest)
>+ init_fences (BB_END (EBB_FIRST_BB (0)));
>+ else
>+ init_fences (bb_note (EBB_FIRST_BB (0)));
>+ global_level = 1;
>+
>+ sel_sched_region_2 (data);
>+
>+ gcc_assert (fences == NULL);
>+
>+ if (pipelining_p)
>+ {
>+ int i;
>+ insn_t head;
>+ basic_block bb;
>+ struct flist_tail_def _new_fences;
>+ flist_tail_t new_fences = &_new_fences;
>+
>+ pipelining_p = false;
>+ max_ws = MIN (max_ws, issue_rate * 3 / 2);
>+ bookkeeping_p = false;
>+ enable_schedule_as_rhs_p = false;
>+
>+ if (!flag_sel_sched_reschedule_pipelined)
>+ {
>+ /* Schedule newly created code, that has not been scheduled yet. */
>+ bool do_p = true;
>+
>+ while (do_p)
>+ {
>+ do_p = false;
>+
>+ for (i = 0; i < current_nr_blocks; i++)
>+ {
>+ basic_block bb = EBB_FIRST_BB (i);
>+
>+ if (sel_bb_empty_p (bb))
>+ {
>+ bitmap_clear_bit (blocks_to_reschedule, bb->index);
>+ continue;
>+ }
>+
>+ if (bitmap_bit_p (blocks_to_reschedule, bb->index))
>+ {
>+ clear_outdated_rtx_info (bb);
>+ if (sel_insn_is_speculation_check (BB_END (bb))
>+ && JUMP_P (BB_END (bb)))
>+ bitmap_set_bit (blocks_to_reschedule,
>+ BRANCH_EDGE (bb)->dest->index);
>+ }
>+ else if (INSN_SCHED_TIMES (sel_bb_head (bb)) <= 0)
>+ bitmap_set_bit (blocks_to_reschedule, bb->index);
>+ }
>+
>+ for (i = 0; i < current_nr_blocks; i++)
>+ {
>+ bb = EBB_FIRST_BB (i);
>+
>+ /* While pipelining outer loops, skip bundling for loop
>+ preheaders. Those will be rescheduled in the outer
>+ loop. */
>+ if (sel_is_loop_preheader_p (bb))
>+ {
>+ clear_outdated_rtx_info (bb);
>+ continue;
>+ }
>+
>+ if (bitmap_bit_p (blocks_to_reschedule, bb->index))
>+ {
>+ flist_tail_init (new_fences);
>+
>+ data->orig_max_seqno = init_seqno (0, blocks_to_reschedule, bb);
>+
>+ /* Mark BB as head of the new ebb. */
>+ bitmap_set_bit (forced_ebb_heads, bb->index);
>+
>+ bitmap_clear_bit (blocks_to_reschedule, bb->index);
>+
>+ gcc_assert (fences == NULL);
>+
>+ init_fences (bb_note (bb));
>+
>+ sel_sched_region_2 (data);
>+
>+ do_p = true;
>+ break;
>+ }
>+ }
>+ }
>+ }
>+ else
>+ {
>+ basic_block loop_entry, loop_preheader = EBB_FIRST_BB (0);
>+
>+ /* Schedule region pre-header first, if not pipelining
>+ outer loops. */
>+ bb = EBB_FIRST_BB (0);
>+ head = sel_bb_head (bb);
>+ loop_entry = EBB_FIRST_BB (1);
>+
>+ /* Don't leave old flags on insns in loop preheader. */
>+ if (sel_is_loop_preheader_p (loop_preheader))
>+ {
>+ basic_block prev_bb = loop_preheader->prev_bb;
>+
>+ /* If... */
>+ if (/* Preheader is empty; */
>+ sel_bb_empty_p (loop_preheader)
>+ /* Block before preheader is in current region and
>+ contains only unconditional jump to header. */
>+ && in_current_region_p (prev_bb)
>+ && NEXT_INSN (bb_note (prev_bb)) == BB_END (prev_bb)
>+ && jump_leads_only_to_bb_p (BB_END (prev_bb),
>+ loop_preheader->next_bb))
>+ {
>+ /* Then remove empty preheader and unnecessary jump from
>+ previous block of preheader (usually latch). */
>+
>+ if (current_loop_nest->latch == prev_bb)
>+ current_loop_nest->latch = NULL;
>+
>+ /* Remove latch! */
>+ clear_expr (INSN_EXPR (BB_END (prev_bb)));
>+ sel_redirect_edge_and_branch (EDGE_SUCC (prev_bb, 0),
>+ loop_preheader);
>+
>+ /* Correct wrong moving of header to BB. */
>+ if (current_loop_nest->header == loop_preheader)
>+ current_loop_nest->header = loop_preheader->next_bb;
>+
>+ gcc_assert (EDGE_SUCC (prev_bb, 0)->flags & EDGE_FALLTHRU);
>+
>+ /* Empty basic blocks should not have av and lv sets. */
>+ free_data_sets (prev_bb);
>+
>+ gcc_assert (BB_AV_SET (loop_preheader) == NULL);
>+ gcc_assert (sel_bb_empty_p (loop_preheader)
>+ && sel_bb_empty_p (prev_bb));
>+
>+ sel_remove_empty_bb (prev_bb, false, true);
>+ sel_remove_empty_bb (loop_preheader, false, true);
>+ preheader_removed = true;
>+ loop_preheader = NULL;
>+ }
>+
>+ /* If BB was not deleted. */
>+ if (loop_preheader)
>+ clear_outdated_rtx_info (loop_preheader);
>+ }
>+
>+ /* Reschedule pipelined code without pipelining. */
>+ for (i = BLOCK_TO_BB (loop_entry->index); i < current_nr_blocks; i++)
>+ clear_outdated_rtx_info (EBB_FIRST_BB (i));
>+
>+ data->orig_max_seqno = init_seqno (0, NULL, NULL);
>+ flist_tail_init (new_fences);
>+
>+ /* Mark BB as head of the new ebb. */
>+ bitmap_set_bit (forced_ebb_heads, loop_entry->index);
>+
>+ gcc_assert (fences == NULL);
>+
>+ if (loop_preheader)
>+ init_fences (BB_END (loop_preheader));
>+ else
>+ init_fences (bb_note (loop_entry));
>+
>+ sel_sched_region_2 (data);
>+ }
>+ }
>+ }
>+
>+ /* Schedule the RGN region. */
>+ void
>+ sel_sched_region (int rgn)
>+ {
>+ if (sel_region_init (rgn))
>+ return;
>+
>+ gcc_assert (preheader_removed == false);
>+
>+ sel_dump_cfg ("after-region-init");
>+
>+ if (sched_verbose >= 1)
>+ sel_print ("Scheduling region %d\n", rgn);
>+
>+ {
>+ /* Decide if we want to schedule this region. */
>+ int region;
>+ int region_start;
>+ int region_stop;
>+ bool region_p;
>+ bool schedule_p;
>+
>+ region = ++sel_sched_region_run;
>+ region_start = PARAM_VALUE (PARAM_REGION_START);
>+ region_stop = PARAM_VALUE (PARAM_REGION_STOP);
>+ region_p = (PARAM_VALUE (PARAM_REGION_P) == 1);
>+
>+ if (region_p)
>+ schedule_p = (region_start <= region) && (region <= region_stop);
>+ else
>+ schedule_p = (region_start > region) || (region > region_stop);
>+
>+ if (sched_is_disabled_for_current_region_p ())
>+ schedule_p = false;
>+
>+ if (schedule_p)
>+ sel_sched_region_1 ();
>+ else
>+ /* Force initialization of INSN_SCHED_CYCLEs for correct bundling. */
>+ reset_sched_cycles_p = true;
>+ }
>+
>+ sel_region_finish ();
>+ preheader_removed = false;
>+
>+ sel_dump_cfg_1 ("after-region-finish",
>+ SEL_DUMP_CFG_CURRENT_REGION | SEL_DUMP_CFG_LV_SET
>+ | SEL_DUMP_CFG_BB_INSNS);
>+ }
>+
>+ /* Perform global init for the scheduler. */
>+ static void
>+ sel_global_init (void)
>+ {
>+ calculate_dominance_info (CDI_DOMINATORS);
>+ alloc_sched_pools ();
>+
>+ /* Setup the infos for sched_init. */
>+ sel_setup_sched_infos ();
>+ setup_sched_dump ();
>+
>+ sched_rgn_init (false);
>+ sched_init ();
>+
>+ sched_init_bbs ();
>+ /* Reset AFTER_RECOVERY if it has been set by the 1st scheduler pass. */
>+ after_recovery = 0;
>+ can_issue_more = issue_rate;
>+
>+ sched_extend_target ();
>+ sched_deps_init (true);
>+ setup_nop_and_exit_insns ();
>+ sel_extend_global_bb_info ();
>+ init_lv_sets ();
>+ init_hard_regs_data ();
>+ }
>+
>+ /* Free the global data of the scheduler. */
>+ static void
>+ sel_global_finish (void)
>+ {
>+ free_bb_note_pool ();
>+ free_lv_sets ();
>+ sel_finish_global_bb_info ();
>+
>+ free_regset_pool ();
>+ free_nop_and_exit_insns ();
>+
>+ sched_rgn_finish ();
>+ sched_deps_finish ();
>+ sched_finish ();
>+
>+ if (current_loops)
>+ sel_finish_pipelining ();
>+
>+ free_sched_pools ();
>+ free_dominance_info (CDI_DOMINATORS);
>+ }
>+
>+ /* Return true when we need to skip selective scheduling. Used for debugging. */
>+ bool
>+ maybe_skip_selective_scheduling (void)
>+ {
>+ int now;
>+ int start;
>+ int stop;
>+ bool do_p;
>+ static int sel1_run = 0;
>+ static int sel2_run = 0;
>+
>+ if (!reload_completed)
>+ {
>+ now = ++sel1_run;
>+ start = PARAM_VALUE (PARAM_SEL1_START);
>+ stop = PARAM_VALUE (PARAM_SEL1_STOP);
>+ do_p = (PARAM_VALUE (PARAM_SEL1_P) == 1);
>+ }
>+ else
>+ {
>+ now = ++sel2_run;
>+ start = PARAM_VALUE (PARAM_SEL2_START);
>+ stop = PARAM_VALUE (PARAM_SEL2_STOP);
>+ do_p = (PARAM_VALUE (PARAM_SEL2_P) == 1);
>+ }
>+
>+ if (do_p)
>+ do_p = (start <= now) && (now <= stop);
>+ else
>+ do_p = (start > now) || (now > stop);
>+
>+ return !do_p;
>+ }
>+
>+ /* The entry point. */
>+ void
>+ run_selective_scheduling (void)
>+ {
>+ int rgn;
>+
>+ /* Taking care of this degenerate case makes the rest of
>+ this code simpler. */
>+ if (n_basic_blocks == NUM_FIXED_BLOCKS)
>+ return;
>+
>+ setup_dump_cfg_params ();
>+
>+ sel_dump_cfg_1 ("before-init",
>+ (SEL_DUMP_CFG_BB_INSNS | SEL_DUMP_CFG_FUNCTION_NAME));
>+
>+ sel_global_init ();
>+
>+ for (rgn = 0; rgn < nr_regions; rgn++)
>+ {
>+ char *buf;
>+ int buf_len = 1 + snprintf (NULL, 0, "before-region-%d", rgn);
>+
>+ buf = xmalloc (buf_len * sizeof (*buf));
>+ snprintf (buf, buf_len, "before-region-%d", rgn);
>+ sel_dump_cfg_1 (buf, SEL_DUMP_CFG_LV_SET | SEL_DUMP_CFG_BB_INSNS);
>+
>+ sel_sched_region (rgn);
>+
>+ snprintf (buf, buf_len, "after-region-%d", rgn);
>+ sel_dump_cfg_1 (buf, SEL_DUMP_CFG_LV_SET | SEL_DUMP_CFG_BB_INSNS);
>+ free (buf);
>+ }
>+
>+ sel_global_finish ();
>+
>+ sel_dump_cfg_1 ("after-finish",
>+ (SEL_DUMP_CFG_BB_INSNS | SEL_DUMP_CFG_FUNCTION_NAME));
>+ }
>+
>+ #endif
>diff -cprNd -x .svn -x .hg trunk/gcc/sel-sched.h sel-sched-branch/gcc/sel-sched.h
>*** trunk/gcc/sel-sched.h Thu Jan 1 03:00:00 1970
>--- sel-sched-branch/gcc/sel-sched.h Fri May 23 18:48:33 2008
>***************
>*** 0 ****
>--- 1,27 ----
>+ /* Instruction scheduling pass.
>+    Copyright (C) 2006, 2007, 2008 Free Software Foundation, Inc.
>+
>+ This file is part of GCC.
>+
>+ GCC is free software; you can redistribute it and/or modify it under
>+ the terms of the GNU General Public License as published by the Free
>+ Software Foundation; either version 3, or (at your option) any later
>+ version.
>+
>+ GCC is distributed in the hope that it will be useful, but WITHOUT ANY
>+ WARRANTY; without even the implied warranty of MERCHANTABILITY or
>+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
>+ for more details.
>+
>+ You should have received a copy of the GNU General Public License
>+ along with GCC; see the file COPYING3. If not see
>+ <http://www.gnu.org/licenses/>. */
>+
>+ #ifndef GCC_SEL_SCHED_H
>+ #define GCC_SEL_SCHED_H
>+
>+ /* The main entry point. */
>+ extern void run_selective_scheduling (void);
>+ extern bool maybe_skip_selective_scheduling (void);
>+
>+ #endif /* GCC_SEL_SCHED_H */


