This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Review of machine independent part of selective scheduler -- part1


The patch for the machine-independent part of the selective scheduler is
pretty big (a bit less than 1MB), so I've decided to do the review in
several parts.  This part covers the changes to existing gcc files
(mostly the existing insn scheduler).  I hope the review of the new
files will be finished and posted next week.

Some general comments first.

There are too many flags and parameters.  You should probably work on
minimizing their number.  I understand that selective scheduling is a
complicated algorithm and that minimizing the number of flags requires
a lot of experimentation.  But I don't think users will really use all
of them.

I found some inconsistency in the prototyping of static functions
(prototypes are sometimes absent for the new functions).  Actually,
you don't need to add static function prototypes unless they are
really necessary for forward declarations; omitting them makes the
code easier to read and maintain.  The many static function prototypes
in the scheduler are a holdover from the time when gcc had to be
compilable by a K&R C compiler (a standard C compiler is required
now).

There are a lot of mistakes and omissions in the ChangeLog.  It looks
like you were in a hurry to send the patch.  The accuracy of the
ChangeLog is important when searching for where a bug was introduced.


2008-06-03 Andrey Belevantsev <abel@ispras.ru>
Dmitry Melnik <dm@ispras.ru>
Dmitry Zhurikhin <zhur@ispras.ru>
Alexander Monakov <amonakov@ispras.ru>
Maxim Kuvyrkov <maxim@codesourcery.com>
* sel-sched.h, sel-sched-dump.h, sel-sched-ir.h, sel-sched.c,
sel-sched-dump.c, sel-sched-ir.c: New files.


...


* common.opt (fsel-sched-bookkeeping, fsel-sched-pipelining,
fsel-sched-pipelining-outer-loops, fsel-sched-renaming,
fsel-sched-substitution, fselective-scheduling): New flags.


A lot of options are absent: fselective-scheduling, fsel-sched-reschedule-pipelined, fsel-sched-restrict-pipelining, fsel-sched-dump-cfg, fsel-insn-range.


* haifa-sched.c: Include vecprim.h.

cfgloop.h is absent.


   (issue_rate, sched_verbose_param, note_list, dfa_state_size,
   ready_try, cycle_issued_insns, dfa_lookahead, max_luid, spec_info):
   Make global.

dfa_lookahead is actually a new variable.



   (old_max_uid, old_last_basic_block): Remove.
   (h_i_d): Make it a vector.
   (INSN_TICK, INTER_TICK, QUEUE_INDEX, INSN_COST): Make them work
   through HID macro.
   (after_recovery, adding_bb_to_current_region_p):
   New variables to handle correct insertion of the recovery code.
   (struct ready_list): Move declaration to sched-int.h.
   (rgn_n_insns): Removed.
   (rtx_vec_t): Move to sched-int.h.
   (find_insn_reg_weight): Remove.
   (find_insn_reg_weight1): Rename to find_insn_reg_weight.
   (extend_h_i_d, init_h_i_d, haifa_init_h_i_d, haifa_finish_h_i_d):
   New functions to initialize / finalize haifa instruction data.

extend_h_i_d and init_h_i_d are not new.


(dep_weak): Move to sched-deps.c. Rename to ds_weak.

It is not renamed everywhere (please see for example try_ready).


   (unlink_other_notes): Move logic to add_to_note_list.  Handle
   selective scheduler.
   (ready_lastpos, ready_element, ready_sort, reemit_notes, move_insn,
   find_fallthru_edge): Make global, remove static prototypes.
   (max_issue): Add privileged_n and state parameters.  Use them.

You missed that you made it global.


   (extend_global, extend_all): Removed.
   (init_before_recovery): Add new param.  Fix the handling of the case
   when we insert a recovery code before the EXIT which has a predecessor
   with a fallthrough edge to it.
   (create_recovery_block): Make global.  Rename to
   sched_create_recovery_block.  Update.
   (change_pattern): Rename to sched_change_pattern.  Make global.
   (speculate_insn): Rename to sched_speculate_insn.  Make global.
   Split haifa-specific functionality into ...
   (haifa_change_pattern): New static function.
   (sched_extend_bb, sched_init_bb): New static functions.

Should be sched_init_bbs.


   (sched_extend_bb): Add the prototype.
   (current_sched_info): Change type to ...
   (struct haifa_sched_info): ... this.  New structure.  Move
   Haifa-specific fields from struct sched_info.

"New structure. Move Haifa-specific fields from struct sched_info." should be in log entry for sched-int.h.


   (insn_cost): Adjust for selective scheduling.
   (dep_cost_1): New static function.  Prototype it.  Move logic from ...

It is not static, and it was not prototyped.


(insn_cost1): ... here.

There is no insn_cost1. You probably meant dep_cost.


(dep_cost): Use dep_cost_1.

Unnecessary indentation.


   (priority): Adjust to work with selective scheduling.  Use
   sched_deps_info instead of current_sched_info.  Process the corner
   case when all dependencies don't contribute to priority.

"Use sched_deps_info instead..." is for contributes_to_priority_p.


   (rank_for_schedule): Use ds_weak instead of dep_weak.
   (advance_state): New function.  Move logic from ...
   (advance_one_cycle): ... here.
   (add_to_note_list, concat_note_lists): New functions.
   (rm_other_notes): Make static.  Adjust for selective scheduling.
   (remove_notes, restore_other_notes): New functions.
   (move_insn): Don't call reemit_notes.
   (choose_ready): Remove lookahead variable, use dfa_lookahead.
   Remove more_issue, max_points.  Move the code to initialize
   max_lookahead_tries to max_issue.
   (schedule_block): Remove rgn_n_insns1 parameter.  Don't allocate
   ready.  Adjust uses of move_insn.  Call restore_other_notes.
   (luid): Remove.
   (sched_init, sched_finish): Move Haifa-specific initialization/
   finalization to ...
   (haifa_sched_init, haifa_sched_finish): ... respectively.
   New functions.
   (setup_sched_dump): New function.
   (haifa_init_only_bb): New static function.
   (haifa_speculate_insn): New static function.
   (try_ready): Use haifa_* instead of speculate_insn and
   change_pattern.
   (extend_ready, extend_all): Remove.
   (sched_extend_ready_list, sched_finish_ready_list): New functions.
   (create_check_block_twin, add_to_speculative_block): Use
   haifa_insns_init instead of extend_global.  Update to use new
   initialization functions.  Change parameter.
   (add_block): Remove.
   (sched_scan_info): New.
   (extend_bb, init_bb, extend_insn, init_insn, init_insns_in_bb,
   sched_scan): New static functions for walking through scheduling
   region.

extend_bb is not a new function; it is probably a rewrite, since the
function existed before.

(sched_init_bbs): New functions to init / finalize

sched_init_bbs is already mentioned above.


basic block information.
(sched_luids): New vector variable to replace uid_to_luid.
(luids_extend_insn): New function.
(sched_max_luid): New variable.
(luids_init_insn): New function.
(sched_init_luids, sched_finish_luids): New functions.
(insn_luid): New debug function.
(sched_extend_target): New function.
(haifa_init_insn): New static function.
(sched_init_only_bb): New hook.
(sched_split_block): New hook.
(sched_split_block_1): New function.
(sched_create_empty_bb): New hook.
(sched_create_empty_bb_1): New function.
(common_sched_info, ready): New global variables.
(current_sched_info_var): Remove.
(move_block_after_check): Use common_sched_info.
(haifa_luid_for_non_insn): New static function.
(init_before_recovery): Use haifa_init_only_bb instead of
add_block.


   * modulo-sched.c: (sms_sched_info): Rename to sms_common_sched_info.
   (sms_sched_deps_info, sms_sched_info): New.
   (setup_sched_infos): New.
   (sms_schedule): Initialize them.  Call haifa_sched_init/finish.
   Do not call regstat_free_calls_crossed, as it called by sched_init.

You don't need to write ", as it called by sched_init", because
ChangeLog entries are only for the changes themselves, not for their
reasons.

(sms_print_insn): Use const_rtx.


...


* sched-deps.c (sched_deps_info): New. Update all relevant uses of
current_sched_info to use it.
(enum reg_pending_barrier_mode): Move to sched-int.h.
(h_d_i_d): New variable. Initialize to NULL.
({true, output, anti, spec, forward}_dependency_cache): Initialize
to NULL.
(sched_has_condition_p): New function. Adjust users of
sched_get_condition to use it instead.
(conditions_mutex_p): Add arguments indicating which conditions are
reversed. Use them.
(sched_get_condition_with_rev): Rename from sched_get_condition. Add
argument to indicate whether returned condition is reversed. Do not
generate new rtx when condition should be reversed; indicate it by
setting new argument instead.
(add_dependence_list_and_free): Add deps parameter.
Update all users. Do not free dependence list when
deps context is readonly.
(add_insn_mem_dependence, flush_pending_lists): Adjust for readonly
contexts.
(remove_from_dependence_list, remove_from_both_dependence_lists): New.
(remove_from_deps): New. Use the above functions.
(deps_analyze_insn): Do not flush pending write lists on speculation
checks. Do not make speculation check a scheduling barrier for memory
references.
(cur_max_luid, cur_insn, can_start_lhs_rhs_p): New static variables.

There is no such variable cur_max_luid.


   (add_or_update_back_dep_1): Initialize present_dep_type.
   (haifa_start_insn, haifa_finish_insn, haifa_note_reg_set,
   haifa_note_reg_clobber, haifa_note_reg_use, haifa_note_mem_dep,
   haifa_note_dep): New functions implementing dependence hooks for
   the Haifa scheduler.
   (note_reg_use, note_reg_set, note_reg_clobber, note_mem_dep,
   note_dep): New functions.
   (ds_to_dt): New function.
   (sched_analyze_reg, sched_analyze_1, sched_analyze_2,
   sched_analyze_insn): Update to use dependency hooks infrastructure
   and readonly contexts.
   (deps_analyze_insn): New function.  Move part of logic from ...
   (sched_analyze): ... here.  Also move some logic to ...
   (deps_start_bb): ... here.  New function.
   (add_forw_dep, delete_forw_dep): Guard use of INSN_DEP_COUNT with
   sel_sched_p.

I don't see it in the patch.


...


* sched-int.h: Include basic-block.h and vecprim.h.
(sched_verbose_param, enum sched_pass_id_t, bb_vec_t, insn_vec_t,
rtx_vec_t): New.
(struct sched_scan_info_def): New structure.
(sched_scan_info, sched_scan, sched_init_bbs, sched_init_luids,
sched_finish_luids, sched_extend_target, haifa_init_h_i_d,
haifa_finish_h_i_d): Declare.
(struct common_sched_info_def): New.
(common_sched_info, haifa_common_sched_info, sched_emulate_haifa_p):
Declare.
(sel_sched_p): New.
(sched_luids): Declare.
(INSN_LUID, LUID_BY_UID, SET_INSN_LUID): Declare.
(sched_max_luid, insn_luid): Declare.
(note_list, remove_notes, restore_other_notes, bb_note): Declare.
(sched_insns_init, sched_insns_finish, xrecalloc, move_insn,
reemit_notes, print_insn, print_pattern, print_value,
haifa_classify_insn, sel_find_rgns, sel_mark_hard_insn,
dfa_state_size, advance_state, setup_sched_dump, sched_init,
sched_finish, sel_insn_is_speculation_check): Export.
(struct ready_list): Move from haifa-sched.c.
(ready_try, ready, max_issue): Export.
(find_fallthru_edge, sched_init_only_bb, sched_split_block,
sched_split_block_1, sched_create_empty_bb, sched_create_empty_bb_1,
sched_create_recovery_block, sched_create_recovery_edges): Export.
(enum reg_pending_barrier_mode): Export.
(struct deps): New fields `last_reg_pending_barrier' and `readonly'.
(deps_t): New.
(struct sched_info): Move compute_jump_reg_dependencies, use_cselib ...
(struct haifa_insn_data): and cant_move to ...
(struct sched_deps_info_def): ... this new structure.
(h_i_d): Export.

It was already external. You changed its type.



   (HID): New accessor macro.  Rewrite h_i_d accessor macros through HID.
   (struct region): Move from sched-rgn.h.
   (nr_regions, rgn_table, rgn_bb_table, block_to_bb, containing_rgn,
   RGN_NR_BLOCKS, RGN_BLOCKS, RGN_DONT_CALC_DEPS, RGN_HAS_REAL_EBB,
   BLOCK_TO_BB, CONTAINING_RGN): Export.
   (ebb_head, BB_TO_BLOCK, EBB_FIRST_BB, EBB_LAST_BB, INSN_BB): Likewise.
   (current_nr_blocks, current_blocks, target_bb): Likewise.
   (sched_is_disabled_for_current_region_p, sched_rgn_init, sched_rgn_finish,
   rgn_setup_region, sched_rgn_compute_dependencies, sched_rgn_local_init,
   extend_regions, rgn_make_new_region_out_of_new_block,
   compute_priorities, debug_rgn_dependencies,
   free_rgn_deps, contributes_to_priority, extend_rgns, deps_join
   rgn_setup_common_sched_info, rgn_setup_sched_infos, debug_regions,
   debug_region, dump_region_dot,     dump_region_dot_file,
   haifa_sched_init, haifa_sched_finish): Export.

* sched-rgn.c: Export region data structures.

This is too ambiguous.  I think you should put the names of the
variables, macros, and functions here.  The ChangeLog is a useful
instrument for finding out what happened to an object, so object names
are important.


   (debug_region, bb_in_region_p, dump_region_dot_file, dump_region_dot): New.
   (too_large): Use estimate_number_of_insns.
   (haifa_find_rgns): New. Move the code from ...
   (find_rgns): ... here.  Call either sel_find_rgns or haifa_find_rgns.
   (free_trg_info): New.
   (compute_trg_info): Allocate candidate tables here instead of ...
   (init_ready_list): ... here.
   (rgn_common_sched_info, rgn_const_sched_deps_info,
   rgn_const_sel_sched_deps_info, rgn_sched_deps_info): New.
   (deps_join): New, extracted from ...
   (propagate_deps): ... here.
   (free_rgn_deps, compute_priorities): New function.
^ functions
   (sched_rgn_init, sched_rgn_finish): New functions.
   (schedule_region): Use them.
   (sched_rgn_local_preinit, sched_rgn_local_init,

I did not find sched_rgn_local_preinit in the patch.


   sched_rgn_local_free, sched_rgn_local_finish): New functions.
   (rgn_make_new_region_out_of_new_block): New.


...


diff -cprNd -x .svn -x .hg trunk/gcc/common.opt sel-sched-branch/gcc/common.opt
*** trunk/gcc/common.opt Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/common.opt Wed Apr 16 00:17:32 2008
*************** fschedule-insns2
*** 929,934 ****
--- 929,984 ----
Common Report Var(flag_schedule_insns_after_reload) Optimization
Reschedule instructions after register allocation
+ ; This flag should be on when a target implements non-trivial
+ ; scheduling hooks, maybe saving some information for its own sake.
+ ; On IA64, for example, this is used for correct bundling.
+
+ fselective-scheduling
+ Common Report Var(flag_selective_scheduling) Optimization
+ Schedule instructions using selective scheduling algorithm
+
+ fselective-scheduling2
+ Common Report Var(flag_selective_scheduling2) Optimization
+ Run selective scheduling after reload
+
+ fsel-sched-bookkeeping
+ Common Report Var(flag_sel_sched_bookkeeping) Init(1) Optimization
+ Schedule instructions that require a copy to be moved
+
+ fsel-sched-pipelining
+ Common Report Var(flag_sel_sched_pipelining) Init(0) Optimization
+ Perform software pipelining of inner loops during selective scheduling
+
+ fsel-sched-pipelining-outer-loops
+ Common Report Var(flag_sel_sched_pipelining_outer_loops) Init(0) Optimization
+ Perform software pipelining of outer loops during selective scheduling
+


This option is not described in the doc.


+ fsel-sched-reschedule-pipelined
+ Common Report Var(flag_sel_sched_reschedule_pipelined) Init(0) Optimization
+ Reschedule pipelined regions without pipelining
+


This option is not described in the doc.

+ fsel-sched-restrict-pipelining=
+ Common RejectNegative Joined Report UInteger Var(flag_sel_sched_restrict_pipelining) Init(0)
+ Restrict the aggressiveness of selective pipelining
+


This option is not described in the doc.


+ fsel-sched-renaming
+ Common Report Var(flag_sel_sched_renaming) Init(1) Optimization
+ Do register renaming in selective scheduling
+
+ fsel-sched-substitution
+ Common Report Var(flag_sel_sched_substitution) Init(1) Optimization
+ Perform substitution in selective scheduling
+
+ fsel-sched-dump-cfg
+ Common Report Var(flag_sel_sched_dump_cfg) Init(0)
+ Dump CFG information during selective scheduling pass.
+
+ fsel-insn-range
+ Common
+
+ fsel-insn-range=
+ Common Joined RejectNegative
+ fsel-insn-range=<number> Expression that determines range of insns to handle with sel-sched
+


I'd remove the fsel-insn-range options, because they were used only
during development.  If they are really important, you should document
them.

 ; sched_stalled_insns means that insns can be moved prematurely from the queue
 ; of stalled insns into the ready list.
 fsched-stalled-insns

...

diff -cprNd -x .svn -x .hg trunk/gcc/doc/invoke.texi sel-sched-branch/gcc/doc/invoke.texi
*** trunk/gcc/doc/invoke.texi Fri May 30 17:31:04 2008
--- sel-sched-branch/gcc/doc/invoke.texi Fri Apr 18 13:23:48 2008
*************** Objective-C and Objective-C++ Dialects}.
*** 301,306 ****
--- 301,307 ----
-feliminate-unused-debug-symbols -femit-class-debug-always @gol
-fmem-report -fpre-ipa-mem-report -fpost-ipa-mem-report -fprofile-arcs @gol
-frandom-seed=@var{string} -fsched-verbose=@var{n} @gol
+ -fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose @gol
-ftest-coverage -ftime-report -fvar-tracking @gol
-g -g@var{level} -gcoff -gdwarf-2 @gol
-ggdb -gstabs -gstabs+ -gvms -gxcoff -gxcoff+ @gol
*************** Objective-C and Objective-C++ Dialects}.
*** 350,355 ****
--- 351,358 ----
-fsched2-use-traces -fsched-spec-load -fsched-spec-load-dangerous @gol
-fsched-stalled-insns-dep[=@var{n}] -fsched-stalled-insns[=@var{n}] @gol
-fschedule-insns -fschedule-insns2 -fsection-anchors -fsee @gol
+ -fselective-scheduling -fselective-scheduling2 -fsel-sched-bookkeeping @gol
+ -fsel-sched-pipelining -fsel-sched-renaming -fsel-sched-substitution @gol
-fsignaling-nans -fsingle-precision-constant -fsplit-ivs-in-unroller @gol
-fsplit-wide-types -fstack-protector -fstack-protector-all @gol
-fstrict-aliasing -fstrict-overflow -fthread-jumps -ftracer -ftree-ccp @gol
*************** and unit/insn info. For @var{n} greater
*** 4943,4948 ****
--- 4946,4955 ----
at abort point, control-flow and regions info. And for @var{n} over
four, @option{-fsched-verbose} also includes dependence info.
+ @item -fsel-sched-dump-cfg
+ @opindex sel-sched-dump-cfg
+ Dump CFG information during selective scheduling pass.
+
@item -save-temps
@opindex save-temps
Store the usual ``temporary'' intermediate files permanently; place them
*************** The modulo scheduling comes before the t
*** 5735,5740 ****
--- 5742,5783 ----
was modulo scheduled we may want to prevent the later scheduling passes
from changing its schedule, we use this option to control that.
+ @item -fselective-scheduling
+ @opindex fselective-scheduling
+ Schedule instructions using selective scheduling algorithm. Selective
+ scheduling runs instead of the first haifa scheduler pass.
+
+ @item -fselective-scheduling2
+ @opindex fselective-scheduling2
+ Schedule instructions using selective scheduling algorithm. Selective
+ scheduling runs instead of the second haifa scheduler pass.
+
+ @item -fsel-sched-bookkeeping
+ @opindex fsel-sched-bookkeeping
+ Enable generation of bookkeeping code during selective scheduling. This option
+ allows insns to be moved through join point.
+
+ @item -fsel-sched-pipelining
+ @opindex fsel-sched-pipelining
+ Enable software pipelining of loops during selective scheduling. This option
+ requires @option{-fsel-sched-bookkeeping}.
+
+ @item -fsel-sched-renaming
+ @opindex fsel-sched-renaming
+ Enable register renaming during selective scheduling. This option allows the
+ scheduler to split insns into rhs and lhs, and choose a different register
+ for lhs, if instruction can not be scheduled at given point with the original
+ one.
+
+ @item -fsel-sched-substitution
+ @opindex fsel-sched-substitution
+ Enable register substitution during selective scheduling. This option allows
+ to overcome true dependencies, while scheduling insns before assignments.
+ E.g.: scheduling 'z = x*2' before 'x = y' will yield 'z = y*2'.
+ This option is useful in combination with @option{-fsel-sched-pipelining} to
+ move instructions through back loop edges, though
+ @option{-fsel-sched-pipelining} is not required for substitution to work.
+

Descriptions of -fsel-sched-pipelining-outer-loops,
-fsel-sched-reschedule-pipelined, -fsel-sched-restrict-pipelining,
-fsel-sched-dump-cfg, and -fsel-insn-range are missing.

Do we really need so many options?  Some of them are used only for
debugging (e.g. fsel-insn-range) and should be removed.  I guess some
options do not result in better code generation in the general case
and probably can be removed too.



@item -fcaller-saves
@opindex fcaller-saves
Enable values to be allocated in registers that will be clobbered by
*************** The minimal probability of speculation s
*** 7256,7261 ****
--- 7299,7309 ----
speculative insn will be scheduled.
The default value is 40.
+ @item selsched-max-lookahead
+ The maximum size of the lookahead window of selective scheduling. It is a
+ depth of search for available instructions.
+ The default value is 50.
+
@item max-last-value-rtl
The maximum size measured as number of RTLs that can be recorded in an expression
diff -cprNd -x .svn -x .hg trunk/gcc/flags.h sel-sched-branch/gcc/flags.h
*** trunk/gcc/flags.h Tue Apr 15 20:10:00 2008
--- sel-sched-branch/gcc/flags.h Wed Apr 16 00:46:03 2008
*************** extern int flag_evaluation_order;
*** 236,241 ****
--- 236,244 ----
extern unsigned HOST_WIDE_INT g_switch_value;
extern bool g_switch_set;
+ /* Same for selective scheduling. */
+ extern bool sel_sched_switch_set;
+

There is no log entry for this change.


/* Values of the -falign-* flags: how much to align labels in code.
   0 means `use default', 1 means `don't align'.
   For each variable, there is an _log variant which is the power
diff -cprNd -x .svn -x .hg trunk/gcc/haifa-sched.c sel-sched-branch/gcc/haifa-sched.c
*** trunk/gcc/haifa-sched.c Tue Apr 15 20:10:00 2008
--- sel-sched-branch/gcc/haifa-sched.c Fri Apr 18 13:23:48 2008
*************** along with GCC; see the file COPYING3.
*** 144,150 ****
--- 144,152 ----
#include "target.h"
#include "output.h"
#include "params.h"
+ #include "vecprim.h"
#include "dbgcnt.h"
+ #include "cfgloop.h"
#ifdef INSN_SCHEDULING
*************** along with GCC; see the file COPYING3.
*** 152,158 ****
machine cycle. It can be defined in the config/mach/mach.h file,
otherwise we set it to 1. */
! static int issue_rate;
/* sched-verbose controls the amount of debugging output the
scheduler prints. It is controlled by -fsched-verbose=N:
--- 154,160 ----
machine cycle. It can be defined in the config/mach/mach.h file,
otherwise we set it to 1. */
! int issue_rate;
/* sched-verbose controls the amount of debugging output the
scheduler prints. It is controlled by -fsched-verbose=N:
*************** static int issue_rate;
*** 163,178 ****
N=3: rtl at abort point, control-flow, regions info.
N=5: dependences info. */
! static int sched_verbose_param = 0;
int sched_verbose = 0;
/* Debugging file. All printouts are sent to dump, which is always set,
either to stderr, or to the dump listing file (-dRS). */
FILE *sched_dump = 0;
- /* Highest uid before scheduling. */
- static int old_max_uid;
- /* fix_sched_param() is called from toplev.c upon detection
of the -fsched-verbose=N option. */
--- 165,177 ----
N=3: rtl at abort point, control-flow, regions info.
N=5: dependences info. */
! int sched_verbose_param = 0;
int sched_verbose = 0;
/* Debugging file. All printouts are sent to dump, which is always set,
either to stderr, or to the dump listing file (-dRS). */
FILE *sched_dump = 0;
/* fix_sched_param() is called from toplev.c upon detection
of the -fsched-verbose=N option. */
*************** fix_sched_param (const char *param, cons
*** 185,194 ****
warning (0, "fix_sched_param: unknown param: %s", param);
}
! struct haifa_insn_data *h_i_d;
! #define INSN_TICK(INSN) (h_i_d[INSN_UID (INSN)].tick)
! #define INTER_TICK(INSN) (h_i_d[INSN_UID (INSN)].inter_tick)
/* If INSN_TICK of an instruction is equal to INVALID_TICK,
then it should be recalculated from scratch. */
--- 184,195 ----
warning (0, "fix_sched_param: unknown param: %s", param);
}
! /* This is a placeholder for the scheduler parameters common
!    to all schedulers. */
! struct common_sched_info_def *common_sched_info;
! #define INSN_TICK(INSN) (HID (INSN)->tick)
! #define INTER_TICK(INSN) (HID (INSN)->inter_tick)
/* If INSN_TICK of an instruction is equal to INVALID_TICK,
then it should be recalculated from scratch. */
*************** struct haifa_insn_data *h_i_d;
*** 202,213 ****
/* List of important notes we must keep around. This is a pointer to the
last element in the list. */
! static rtx note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
If NULL - no speculation. */
! spec_info_t spec_info;
/* True, if recovery block was added during scheduling of current block.
Used to determine, if we need to fix INSN_TICKs. */
--- 203,214 ----
/* List of important notes we must keep around. This is a pointer to the
last element in the list. */
! rtx note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
If NULL - no speculation. */
! spec_info_t spec_info = NULL;
/* True, if recovery block was added during scheduling of current block.
Used to determine, if we need to fix INSN_TICKs. */
*************** static int nr_begin_data, nr_be_in_data,
*** 224,235 ****
/* Array used in {unlink, restore}_bb_notes. */
static rtx *bb_header = 0;
- /* Number of basic_blocks. */
- static int old_last_basic_block;
- /* Basic block after which recovery blocks will be created. */
static basic_block before_recovery;
/* Queues, etc. */
/* An instruction is ready to be scheduled when all insns preceding it
--- 225,240 ----
/* Array used in {unlink, restore}_bb_notes. */
static rtx *bb_header = 0;
/* Basic block after which recovery blocks will be created. */
static basic_block before_recovery;
+ /* Basic block just before the EXIT_BLOCK and after recovery, if we have
+ created it. */
+ basic_block after_recovery;
+
+ /* FALSE if we add bb to another region, so we don't need to initialize it. */
+ bool adding_bb_to_current_region_p = true;
+
/* Queues, etc. */
/* An instruction is ready to be scheduled when all insns preceding it
*************** static int q_size = 0;
*** 290,296 ****
QUEUE_READY - INSN is in ready list.
N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
! #define QUEUE_INDEX(INSN) (h_i_d[INSN_UID (INSN)].queue_index)
/* The following variable value refers for all current and future
reservations of the processor units. */
--- 295,301 ----
QUEUE_READY - INSN is in ready list.
N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
! #define QUEUE_INDEX(INSN) (HID (INSN)->queue_index)
/* The following variable value refers for all current and future
reservations of the processor units. */
*************** state_t curr_state;
*** 298,334 ****
/* The following variable value is size of memory representing all
current and future reservations of the processor units. */
! static size_t dfa_state_size;
/* The following array is used to find the best insn from ready when
the automaton pipeline interface is used. */
! static char *ready_try;
!
! /* Describe the ready list of the scheduler.
! VEC holds space enough for all insns in the current region. VECLEN
! says how many exactly.
! FIRST is the index of the element with the highest priority; i.e. the
! last one in the ready list, since elements are ordered by ascending
! priority.
! N_READY determines how many insns are on the ready list. */
! struct ready_list
! {
! rtx *vec;
! int veclen;
! int first;
! int n_ready;
! };
! /* The pointer to the ready list. */
! static struct ready_list *readyp;
/* Scheduling clock. */
static int clock_var;
- /* Number of instructions in current scheduling region. */
- static int rgn_n_insns;
- static int may_trap_exp (const_rtx, int);
/* Nonzero iff the address is comprised from at most 1 register. */
--- 303,323 ----
/* The following variable value is size of memory representing all
current and future reservations of the processor units. */
! size_t dfa_state_size;
/* The following array is used to find the best insn from ready when
the automaton pipeline interface is used. */
! char *ready_try = NULL;
! /* The ready list. */
! struct ready_list ready = {NULL, 0, 0, 0};
! /* The pointer to the ready list (to be removed). */
! static struct ready_list *readyp = &ready;

There is no log entry for initialization of readyp.


/* Scheduling clock. */
static int clock_var;
static int may_trap_exp (const_rtx, int);
/* Nonzero iff the address is comprised from at most 1 register. */
*************** static int may_trap_exp (const_rtx, int)
*** 342,347 ****
--- 331,369 ----
/* Returns a class that insn with GET_DEST(insn)=x may belong to,
as found by analyzing insn's expression. */
+
+ static int haifa_luid_for_non_insn (rtx x);
+
+ /* Haifa version of sched_info hooks common to all headers. */
+ const struct common_sched_info_def haifa_common_sched_info =
+ {
+ NULL, /* fix_recovery_cfg */
+ NULL, /* add_block */
+ NULL, /* estimate_number_of_insns */
+ haifa_luid_for_non_insn, /* luid_for_non_insn */
+ SCHED_PASS_UNKNOWN /* sched_pass_id */
+ };
+
+ const struct sched_scan_info_def *sched_scan_info;
+
+ /* Mapping from instruction UID to its Logical UID. */
+ VEC (int, heap) *sched_luids = NULL;
+
+ /* Next LUID to assign to an instruction. */
+ int sched_max_luid = 1;
+
+ /* Haifa Instruction Data. */
+ VEC (haifa_insn_data_def, heap) *h_i_d = NULL;
+
+ void (* sched_init_only_bb) (basic_block, basic_block);
+
+ /* Split block function. Different schedulers might use different functions
+ to handle their internal data consistent. */
+ basic_block (* sched_split_block) (basic_block, rtx);
+
+ /* Create empty basic block after the specified block. */
+ basic_block (* sched_create_empty_bb) (basic_block);
+
static int
may_trap_exp (const_rtx x, int is_store)
{
*************** haifa_classify_insn (const_rtx insn)
*** 478,487 ****
return haifa_classify_rtx (PATTERN (insn));
}
-
- /* A typedef for rtx vector. */
- typedef VEC(rtx, heap) *rtx_vec_t;
- /* Forward declarations. */
static int priority (rtx);
--- 500,505 ----
*************** static void swap_sort (rtx *, int);
*** 490,499 ****
static void queue_insn (rtx, int);
static int schedule_insn (rtx);
static int find_set_reg_weight (const_rtx);
! static void find_insn_reg_weight (basic_block);
! static void find_insn_reg_weight1 (rtx);
static void adjust_priority (rtx);
static void advance_one_cycle (void);
/* Notes handling mechanism:
=========================
--- 508,519 ----
static void queue_insn (rtx, int);
static int schedule_insn (rtx);
static int find_set_reg_weight (const_rtx);
! static void find_insn_reg_weight (const_rtx);
static void adjust_priority (rtx);
static void advance_one_cycle (void);
+ static void extend_h_i_d (void);
+ static dw_t dep_weak (ds_t);
+
/* Notes handling mechanism:
=========================
*************** static void advance_one_cycle (void);
*** 511,522 ****
unlink_other_notes ()). After scheduling the block, these notes are
inserted at the beginning of the block (in schedule_block()). */
- static rtx unlink_other_notes (rtx, rtx);
- static void reemit_notes (rtx);
-
- static rtx *ready_lastpos (struct ready_list *);
static void ready_add (struct ready_list *, rtx, bool);
- static void ready_sort (struct ready_list *);
static rtx ready_remove_first (struct ready_list *);
static void queue_to_ready (struct ready_list *);
--- 531,537 ----
*************** static int early_queue_to_ready (state_t
*** 524,537 ****
static void debug_ready_list (struct ready_list *);
- static void move_insn (rtx);
- /* The following functions are used to implement multi-pass scheduling
on the first cycle. */
- static rtx ready_element (struct ready_list *, int);
static rtx ready_remove (struct ready_list *, int);
static void ready_remove_insn (rtx);
- static int max_issue (struct ready_list *, int *, int);
static int choose_ready (struct ready_list *, rtx *);
--- 539,548 ----
*************** static void change_queue_index (rtx, int
*** 542,567 ****
/* The following functions are used to implement scheduling of data/control
speculative instructions. */
- static void extend_h_i_d (void);
- static void extend_ready (int);
- static void extend_global (rtx);
- static void extend_all (rtx);
- static void init_h_i_d (rtx);
static void generate_recovery_code (rtx);
static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
! static dw_t dep_weak (ds_t);
! static edge find_fallthru_edge (basic_block);
! static void init_before_recovery (void);
! static basic_block create_recovery_block (void);
static void create_check_block_twin (rtx, bool);
static void fix_recovery_deps (basic_block);
! static void change_pattern (rtx, rtx);
! static int speculate_insn (rtx, ds_t, rtx *);
static void dump_new_block_header (int, basic_block, rtx, rtx);
static void restore_bb_notes (basic_block);
- static void extend_bb (void);
static void fix_jump_move (rtx);
static void move_block_after_check (rtx);
static void move_succs (VEC(edge,gc) **, basic_block);
--- 553,568 ----
/* The following functions are used to implement scheduling of data/control
speculative instructions. */
static void generate_recovery_code (rtx);
static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
! static void init_before_recovery (basic_block *);
static void create_check_block_twin (rtx, bool);
static void fix_recovery_deps (basic_block);
! static void haifa_change_pattern (rtx, rtx);
static void dump_new_block_header (int, basic_block, rtx, rtx);
static void restore_bb_notes (basic_block);
static void fix_jump_move (rtx);
static void move_block_after_check (rtx);
static void move_succs (VEC(edge,gc) **, basic_block);
*************** static void sched_remove_insn (rtx);
*** 569,574 ****
--- 570,576 ----
static void clear_priorities (rtx, rtx_vec_t *);
static void calc_priorities (rtx_vec_t);
static void add_jump_dependencies (rtx, rtx);
+ static void sched_extend_bb (void);
#ifdef ENABLE_CHECKING
static int has_edge_p (VEC(edge,gc) *, int);
static void check_cfg (rtx, rtx);
*************** static void check_cfg (rtx, rtx);
*** 577,583 ****
#endif /* INSN_SCHEDULING */

/* Point to state used for the current scheduling pass. */
! struct sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
--- 579,585 ----
#endif /* INSN_SCHEDULING */

/* Point to state used for the current scheduling pass. */
! struct haifa_sched_info *current_sched_info;

#ifndef INSN_SCHEDULING
void
*************** schedule_insns (void)
*** 586,594 ****
}
#else
- /* Working copy of frontend's sched_info variable. */
- static struct sched_info current_sched_info_var;
-
/* Pointer to the last instruction scheduled. Used by rank_for_schedule,
so that insns independent of the last scheduled insn will be preferred
over dependent instructions. */
--- 588,593 ----
*************** static rtx last_scheduled_insn;
*** 597,603 ****
/* Cached cost of the instruction. Use below function to get cost of the
insn. -1 here means that the field is not initialized. */
! #define INSN_COST(INSN) (h_i_d[INSN_UID (INSN)].cost)
/* Compute cost of executing INSN.
This is the number of cycles between instruction issue and
--- 596,602 ----
/* Cached cost of the instruction. Use below function to get cost of the
insn. -1 here means that the field is not initialized. */
! #define INSN_COST(INSN) (HID (INSN)->cost)
/* Compute cost of executing INSN.
This is the number of cycles between instruction issue and
*************** static rtx last_scheduled_insn;
*** 605,611 ****
HAIFA_INLINE int
insn_cost (rtx insn)
{
! int cost = INSN_COST (insn);
if (cost < 0)
{
--- 604,624 ----
HAIFA_INLINE int
insn_cost (rtx insn)
{
! int cost;
!
! if (sel_sched_p ())
! {
! if (recog_memoized (insn) < 0)
! return 0;
!
! cost = insn_default_latency (insn);
! if (cost < 0)
! cost = 0;
!
! return cost;
! }
!
! cost = INSN_COST (insn);
if (cost < 0)
{
*************** insn_cost (rtx insn)
*** 635,641 ****
This is the number of cycles between instruction issue and
instruction results. */
int
! dep_cost (dep_t link)
{
rtx used = DEP_CON (link);
int cost;
--- 648,654 ----
This is the number of cycles between instruction issue and
instruction results. */
int
! dep_cost_1 (dep_t link, dw_t dw)
{
rtx used = DEP_CON (link);
int cost;
*************** dep_cost (dep_t link)
*** 666,673 ****
else if (bypass_p (insn))
cost = insn_latency (insn, used);
}
! if (targetm.sched.adjust_cost != NULL)
{
/* This variable is used for backward compatibility with the
targets. */
--- 679,692 ----
else if (bypass_p (insn))
cost = insn_latency (insn, used);
}
+
! if (targetm.sched.adjust_cost_2)
! {
! cost = targetm.sched.adjust_cost_2 (used, (int) dep_type, insn, cost,
! dw);
! }
! else if (targetm.sched.adjust_cost != NULL)
{
/* This variable is used for backward compatibility with the
targets. */
*************** dep_cost (dep_t link)
*** 693,698 ****
--- 712,726 ----
return cost;
}
+ /* Compute cost of dependence LINK.
+ This is the number of cycles between instruction issue and
+ instruction results. */
+ int
+ dep_cost (dep_t link)
+ {
+ return dep_cost_1 (link, 0);
+ }
+
/* Return 'true' if DEP should be included in priority calculations. */
static bool
contributes_to_priority_p (dep_t dep)
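Keeping dep_cost () as a thin wrapper over the new dep_cost_1 () is a good
way to preserve the old interface. For reference, the shape being
introduced is just the following (a sketch with stand-in types and a
simulated cost computation, not the scheduler's real code):

```c
#include <assert.h>

typedef int dw_t;  /* Stand-in for the scheduler's dependence-weakness type. */

/* The parameterized worker: the extra DW argument is what the new
   adjust_cost_2 target hook consumes.  The cost computation here is
   simulated by simply adding the weakness to the base latency. */
static int
dep_cost_1 (int latency, dw_t dw)
{
  return latency + (int) dw;
}

/* The old entry point survives as a wrapper passing a zero weakness,
   so existing callers need no changes. */
static int
dep_cost (int latency)
{
  return dep_cost_1 (latency, 0);
}
```

Old callers of dep_cost () see identical behavior, while new selective-scheduling code can pass a real weakness through dep_cost_1 ().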
*************** contributes_to_priority_p (dep_t dep)
*** 708,714 ****
their producers will increase, and, thus, the
producers will more likely be scheduled, thus,
resolving the dependence. */
! if ((current_sched_info->flags & DO_SPECULATION)
&& !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
&& (DEP_STATUS (dep) & SPECULATIVE))
return false;
--- 736,742 ----
their producers will increase, and, thus, the
producers will more likely be scheduled, thus,
resolving the dependence. */
! if (sched_deps_info->generate_spec_deps
&& !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
&& (DEP_STATUS (dep) & SPECULATIVE))
return false;
*************** priority (rtx insn)
*** 728,734 ****
if (!INSN_PRIORITY_KNOWN (insn))
{
! int this_priority = 0;
if (sd_lists_empty_p (insn, SD_LIST_FORW))
/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
--- 756,762 ----
if (!INSN_PRIORITY_KNOWN (insn))
{
! int this_priority = -1;
if (sd_lists_empty_p (insn, SD_LIST_FORW))
/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
*************** priority (rtx insn)
*** 747,753 ****
INSN_FORW_DEPS list of each instruction in the corresponding
recovery block. */
! rec = RECOVERY_BLOCK (insn);
if (!rec || rec == EXIT_BLOCK_PTR)
{
prev_first = PREV_INSN (insn);
--- 775,782 ----
INSN_FORW_DEPS list of each instruction in the corresponding
recovery block. */
! /* Selective scheduling does not define RECOVERY_BLOCK macro. */
! rec = sel_sched_p () ? NULL : RECOVERY_BLOCK (insn);
if (!rec || rec == EXIT_BLOCK_PTR)
{
prev_first = PREV_INSN (insn);
*************** priority (rtx insn)
*** 800,805 ****
--- 829,842 ----
}
while (twin != prev_first);
}
+
+ if (this_priority < 0)
+ {
+ gcc_assert (this_priority == -1);
+
+ this_priority = insn_cost (insn);
+ }
+
INSN_PRIORITY (insn) = this_priority;
INSN_PRIORITY_STATUS (insn) = 1;
}
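The new fallback in priority () is sound: -1 now marks "no forward
dependency contributed a priority", and insn_cost () supplies the default.
A minimal sketch of that sentinel pattern (the function names below are
stand-ins, not the real scheduler code):

```c
#include <assert.h>

/* Stand-in for insn_cost (): pretend every insn costs one cycle. */
static int
fake_insn_cost (void)
{
  return 1;
}

/* Sketch of the new priority () logic: -1 is the "not computed" sentinel;
   if no dependency produced a positive priority, fall back to the insn's
   own cost, as the patch now does after the dependency walk. */
static int
compute_priority (int best_dep_priority)
{
  int this_priority = -1;

  if (best_dep_priority > 0)
    this_priority = best_dep_priority;

  if (this_priority < 0)
    {
      assert (this_priority == -1);
      this_priority = fake_insn_cost ();
    }

  return this_priority;
}
```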
*************** rank_for_schedule (const void *x, const
*** 851,863 ****
ds1 = TODO_SPEC (tmp) & SPECULATIVE;
if (ds1)
! dw1 = dep_weak (ds1);
else
dw1 = NO_DEP_WEAK;
ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
if (ds2)
! dw2 = dep_weak (ds2);
else
dw2 = NO_DEP_WEAK;
--- 888,900 ----
ds1 = TODO_SPEC (tmp) & SPECULATIVE;
if (ds1)
! dw1 = ds_weak (ds1);
else
dw1 = NO_DEP_WEAK;
ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
if (ds2)
! dw2 = ds_weak (ds2);
else
dw2 = NO_DEP_WEAK;
*************** queue_insn (rtx insn, int n_cycles)
*** 964,970 ****
fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
}
! QUEUE_INDEX (insn) = next_q;
}
--- 1001,1007 ----
fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
}
! QUEUE_INDEX (insn) = next_q;
}
*************** queue_remove (rtx insn)
*** 981,987 ****
/* Return a pointer to the bottom of the ready list, i.e. the insn
with the lowest priority. */
! HAIFA_INLINE static rtx *
ready_lastpos (struct ready_list *ready)
{
gcc_assert (ready->n_ready >= 1);
--- 1018,1024 ----
/* Return a pointer to the bottom of the ready list, i.e. the insn
with the lowest priority. */
! rtx *
ready_lastpos (struct ready_list *ready)
{
gcc_assert (ready->n_ready >= 1);
*************** ready_remove_first (struct ready_list *r
*** 1054,1060 ****
insn with the highest priority is 0, and the lowest priority has
N_READY - 1. */
! HAIFA_INLINE static rtx
ready_element (struct ready_list *ready, int index)
{
gcc_assert (ready->n_ready && index < ready->n_ready);
--- 1091,1097 ----
insn with the highest priority is 0, and the lowest priority has
N_READY - 1. */
! rtx
ready_element (struct ready_list *ready, int index)
{
gcc_assert (ready->n_ready && index < ready->n_ready);
*************** ready_remove_insn (rtx insn)
*** 1101,1107 ****
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
macro. */
! HAIFA_INLINE static void
ready_sort (struct ready_list *ready)
{
rtx *first = ready_lastpos (ready);
--- 1138,1144 ----
/* Sort the ready list READY by ascending priority, using the SCHED_SORT
macro. */
! void
ready_sort (struct ready_list *ready)
{
rtx *first = ready_lastpos (ready);
*************** adjust_priority (rtx prev)
*** 1127,1153 ****
targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
}
! /* Advance time on one cycle. */
! HAIFA_INLINE static void
! advance_one_cycle (void)
{
if (targetm.sched.dfa_pre_advance_cycle)
targetm.sched.dfa_pre_advance_cycle ();
if (targetm.sched.dfa_pre_cycle_insn)
! state_transition (curr_state,
targetm.sched.dfa_pre_cycle_insn ());
! state_transition (curr_state, NULL);
if (targetm.sched.dfa_post_cycle_insn)
! state_transition (curr_state,
targetm.sched.dfa_post_cycle_insn ());
if (targetm.sched.dfa_post_advance_cycle)
targetm.sched.dfa_post_advance_cycle ();
}
/* Clock at which the previous instruction was issued. */
static int last_clock_var;
--- 1164,1199 ----
targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
}
! /* Advance DFA state STATE on one cycle. */
! void
! advance_state (state_t state)
{
if (targetm.sched.dfa_pre_advance_cycle)
targetm.sched.dfa_pre_advance_cycle ();
if (targetm.sched.dfa_pre_cycle_insn)
! state_transition (state,
targetm.sched.dfa_pre_cycle_insn ());
! state_transition (state, NULL);
if (targetm.sched.dfa_post_cycle_insn)
! state_transition (state,
targetm.sched.dfa_post_cycle_insn ());
if (targetm.sched.dfa_post_advance_cycle)
targetm.sched.dfa_post_advance_cycle ();
}
+ /* Advance time on one cycle. */
+ HAIFA_INLINE static void
+ advance_one_cycle (void)
+ {
+ advance_state (curr_state);
+ if (sched_verbose >= 6)
+ fprintf (sched_dump, "\n;;\tAdvanced a state.\n");
+ }
+ /* Clock at which the previous instruction was issued. */
static int last_clock_var;
*************** schedule_insn (rtx insn)
*** 1258,1267 ****
/* Functions for handling of notes. */
/* Delete notes beginning with INSN and put them in the chain
of notes ended by NOTE_LIST.
Returns the insn following the notes. */
-
static rtx
unlink_other_notes (rtx insn, rtx tail)
{
--- 1304,1348 ----
/* Functions for handling of notes. */
+ /* Insert the INSN note at the end of the notes list. */
+ static void
+ add_to_note_list (rtx insn, rtx *note_list_end_p)
+ {
+ PREV_INSN (insn) = *note_list_end_p;
+ if (*note_list_end_p)
+ NEXT_INSN (*note_list_end_p) = insn;
+ *note_list_end_p = insn;
+ }
+
+ /* Add note list that ends on FROM_END to the end of TO_ENDP. */
+ void
+ concat_note_lists (rtx from_end, rtx *to_endp)
+ {
+ rtx from_start;
+
+ if (from_end == NULL)
+ /* It's easy when have nothing to concat. */
+ return;
+
+ if (*to_endp == NULL)
+ /* It's also easy when destination is empty. */
+ {
+ *to_endp = from_end;
+ return;
+ }
+
+ from_start = from_end;
+ /* A note list should be traversed via PREV_INSN. */
+ while (PREV_INSN (from_start) != NULL)
+ from_start = PREV_INSN (from_start);
+
+ add_to_note_list (from_start, to_endp);
+ *to_endp = from_end;
+ }
+
/* Delete notes beginning with INSN and put them in the chain
of notes ended by NOTE_LIST.
Returns the insn following the notes. */
static rtx
unlink_other_notes (rtx insn, rtx tail)
{
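concat_note_lists () is easy to misread because a note list is identified
by its *last* element and is walked backwards through PREV_INSN. A toy
model of the same splice (the struct is a stand-in for rtx, not the real
insn representation):

```c
#include <assert.h>
#include <stddef.h>

/* Toy note chain: identified by its LAST element, traversed via prev,
   mirroring how concat_note_lists () walks PREV_INSN. */
struct note
{
  struct note *prev, *next;
  int id;
};

/* Append the list ending at FROM_END after the list ending at *TO_ENDP,
   following the logic of the patch's concat_note_lists (). */
static void
concat (struct note *from_end, struct note **to_endp)
{
  struct note *from_start;

  if (from_end == NULL)
    return;			/* Nothing to concat.  */
  if (*to_endp == NULL)
    {
      *to_endp = from_end;	/* Destination was empty.  */
      return;
    }

  /* Walk back to the first note of the FROM list.  */
  from_start = from_end;
  while (from_start->prev != NULL)
    from_start = from_start->prev;

  /* Link it after the current tail and update the tail pointer.  */
  from_start->prev = *to_endp;
  (*to_endp)->next = from_start;
  *to_endp = from_end;
}
```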
*************** unlink_other_notes (rtx insn, rtx tail)
*** 1292,1313 ****
/* See sched_analyze to see how these are handled. */
if (NOTE_KIND (insn) != NOTE_INSN_EH_REGION_BEG
&& NOTE_KIND (insn) != NOTE_INSN_EH_REGION_END)
! {
! /* Insert the note at the end of the notes list. */
! PREV_INSN (insn) = note_list;
! if (note_list)
! NEXT_INSN (note_list) = insn;
! note_list = insn;
! }
insn = next;
}
return insn;
}
/* Return the head and tail pointers of ebb starting at BEG and ending
at END. */
-
void
get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
{
--- 1373,1394 ----
/* See sched_analyze to see how these are handled. */
if (NOTE_KIND (insn) != NOTE_INSN_EH_REGION_BEG
&& NOTE_KIND (insn) != NOTE_INSN_EH_REGION_END)
! add_to_note_list (insn, &note_list);
insn = next;
}
+
+ if (insn == tail)
+ {
+ gcc_assert (sel_sched_p ());
+ return prev;
+ }
+
return insn;
}
/* Return the head and tail pointers of ebb starting at BEG and ending
at END. */
void
get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
{
*************** no_real_insns_p (const_rtx head, const_r
*** 1360,1367 ****
/* Delete notes between HEAD and TAIL and put them in the chain
of notes ended by NOTE_LIST. */
!
! void
rm_other_notes (rtx head, rtx tail)
{
rtx next_tail;
--- 1441,1447 ----
/* Delete notes between HEAD and TAIL and put them in the chain
of notes ended by NOTE_LIST. */
! static void
rm_other_notes (rtx head, rtx tail)
{
rtx next_tail;
*************** rm_other_notes (rtx head, rtx tail)
*** 1382,1401 ****
if (NOTE_NOT_BB_P (insn))
{
prev = insn;
-
insn = unlink_other_notes (insn, next_tail);
! gcc_assert (prev != tail && prev != head && insn != next_tail);
}
}
}
/* Functions for computation of registers live/usage info. */
/* This function looks for a new register being defined.
If the destination register is already used by the source,
a new register is not needed. */
-
static int
find_set_reg_weight (const_rtx x)
{
--- 1462,1541 ----
if (NOTE_NOT_BB_P (insn))
{
prev = insn;
insn = unlink_other_notes (insn, next_tail);
! gcc_assert ((sel_sched_p ()
! || prev != tail) && prev != head && insn != next_tail);
}
}
}
+ /* Same as above, but also process REG_SAVE_NOTEs of HEAD. */
+ void
+ remove_notes (rtx head, rtx tail)
+ {
+ /* rm_other_notes only removes notes which are _inside_ the
+ block---that is, it won't remove notes before the first real insn
+ or after the last real insn of the block. So if the first insn
+ has a REG_SAVE_NOTE which would otherwise be emitted before the
+ insn, it is redundant with the note before the start of the
+ block, and so we have to take it out. */
+ if (INSN_P (head))
+ {
+ rtx note;
+
+ for (note = REG_NOTES (head); note; note = XEXP (note, 1))
+ if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
+ remove_note (head, note);
+ }
+
+ /* Remove remaining note insns from the block, save them in
+ note_list. These notes are restored at the end of
+ schedule_block (). */
+ rm_other_notes (head, tail);
+ }
+
+ /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
+ previously found among the insns. Insert them just before HEAD. */
+ rtx
+ restore_other_notes (rtx head, basic_block head_bb)
+ {
+ if (note_list != 0)
+ {
+ rtx note_head = note_list;
+
+ if (head)
+ head_bb = BLOCK_FOR_INSN (head);
+ else
+ head = NEXT_INSN (bb_note (head_bb));
+
+ while (PREV_INSN (note_head))
+ {
+ set_block_for_insn (note_head, head_bb);
+ note_head = PREV_INSN (note_head);
+ }
+ /* In the above cycle we've missed this note. */
+ set_block_for_insn (note_head, head_bb);
+
+ PREV_INSN (note_head) = PREV_INSN (head);
+ NEXT_INSN (PREV_INSN (head)) = note_head;
+ PREV_INSN (head) = note_list;
+ NEXT_INSN (note_list) = head;
+
+ if (BLOCK_FOR_INSN (head) != head_bb)
+ BB_END (head_bb) = note_list;
+
+ head = note_head;
+ }
+
+ return head;
+ }
+
/* Functions for computation of registers live/usage info. */
/* This function looks for a new register being defined.
If the destination register is already used by the source,
a new register is not needed. */
static int
find_set_reg_weight (const_rtx x)
{
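Likewise for the new restore_other_notes (): the chain ending at
note_list is walked back to its first element, which becomes the new block
head. A toy version of that splice (stand-in types, and the basic-block
bookkeeping omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Toy insn chain; a stand-in for rtx, walked backwards via prev just as
   restore_other_notes () walks PREV_INSN. */
struct node
{
  struct node *prev, *next;
  int id;
};

/* Splice the note chain ending at NOTE_LIST back in front of HEAD and
   return the new head, mirroring the shape of restore_other_notes (). */
static struct node *
restore (struct node *note_list, struct node *head)
{
  struct node *note_head = note_list;

  if (note_list == NULL)
    return head;

  /* Find the first note of the chain.  */
  while (note_head->prev != NULL)
    note_head = note_head->prev;

  /* Link the whole chain before HEAD.  */
  note_list->next = head;
  head->prev = note_list;

  return note_head;
}
```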
*************** find_set_reg_weight (const_rtx x)
*** 1417,1441 ****
return 0;
}
! /* Calculate INSN_REG_WEIGHT for all insns of a block. */
!
! static void
! find_insn_reg_weight (basic_block bb)
! {
! rtx insn, next_tail, head, tail;
!
! get_ebb_head_tail (bb, bb, &head, &tail);
! next_tail = NEXT_INSN (tail);
!
! for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
! find_insn_reg_weight1 (insn);
! }
!
! /* Calculate INSN_REG_WEIGHT for single instruction.
! Separated from find_insn_reg_weight because of need
! to initialize new instruction in generate_recovery_code. */
static void
! find_insn_reg_weight1 (rtx insn)
{
int reg_weight = 0;
rtx x;
--- 1557,1565 ----
return 0;
}
! /* Calculate INSN_REG_WEIGHT for INSN. */
static void
! find_insn_reg_weight (const_rtx insn)
{
int reg_weight = 0;
rtx x;
*************** debug_ready_list (struct ready_list *rea
*** 1741,1748 ****
NOTEs. The REG_SAVE_NOTE note following first one is contains the
saved value for NOTE_BLOCK_NUMBER which is useful for
NOTE_INSN_EH_REGION_{BEG,END} NOTEs. */
!
! static void
reemit_notes (rtx insn)
{
rtx note, last = insn;
--- 1865,1871 ----
NOTEs. The REG_SAVE_NOTE note following first one is contains the
saved value for NOTE_BLOCK_NUMBER which is useful for
NOTE_INSN_EH_REGION_{BEG,END} NOTEs. */
! void
reemit_notes (rtx insn)
{
rtx note, last = insn;
*************** reemit_notes (rtx insn)
*** 1760,1770 ****
}
/* Move INSN. Reemit notes if needed. Update CFG, if needed. */
! static void
! move_insn (rtx insn)
{
- rtx last = last_scheduled_insn;
-
if (PREV_INSN (insn) != last)
{
basic_block bb;
--- 1883,1891 ----
}
/* Move INSN. Reemit notes if needed. Update CFG, if needed. */
! void
! move_insn (rtx insn, rtx last, rtx nt)
{
if (PREV_INSN (insn) != last)
{
basic_block bb;
*************** move_insn (rtx insn)
*** 1784,1792 ****
jump_p = control_flow_insn_p (insn);
gcc_assert (!jump_p
! || ((current_sched_info->flags & SCHED_RGN)
&& IS_SPECULATION_BRANCHY_CHECK_P (insn))
! || (current_sched_info->flags & SCHED_EBB));
gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
--- 1905,1914 ----
jump_p = control_flow_insn_p (insn);
gcc_assert (!jump_p
! || ((common_sched_info->sched_pass_id == SCHED_RGN_PASS)
&& IS_SPECULATION_BRANCHY_CHECK_P (insn))
! || (common_sched_info->sched_pass_id
! == SCHED_EBB_PASS));
gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);

There is no ChangeLog entry for this change.


*************** move_insn (rtx insn)
*** 1798,1805 ****
if (jump_p)
/* We move the block note along with jump. */
{
! /* NT is needed for assertion below. */
! rtx nt = current_sched_info->next_tail;
note = NEXT_INSN (insn);
while (NOTE_NOT_BB_P (note) && note != nt)
--- 1920,1926 ----
if (jump_p)
/* We move the block note along with jump. */
{
! gcc_assert (nt);
note = NEXT_INSN (insn);
while (NOTE_NOT_BB_P (note) && note != nt)
*************** move_insn (rtx insn)
*** 1842,1849 ****
if (BB_END (bb) == last)
BB_END (bb) = insn;
}
-
- reemit_notes (insn);
SCHED_GROUP_P (insn) = 0;
}
--- 1963,1968 ----
*************** static struct choice_entry *choice_stack
*** 1868,1874 ****
/* The following variable value is number of essential insns issued on
the current cycle. An insn is essential one if it changes the
processors state. */
! static int cycle_issued_insns;
/* The following variable value is maximal number of tries of issuing
insns for the first cycle multipass insn scheduling. We define
--- 1987,1996 ----
/* The following variable value is number of essential insns issued on
the current cycle. An insn is essential one if it changes the
processors state. */
! int cycle_issued_insns;
!
! /* This holds the value of the target dfa_lookahead hook. */
! int dfa_lookahead;
/* The following variable value is maximal number of tries of issuing
insns for the first cycle multipass insn scheduling. We define
*************** static int cached_issue_rate = 0;
*** 1899,1941 ****
of all instructions in READY. The function stops immediately,
if it reached the such a solution, that all instruction can be issued.
INDEX will contain index of the best insn in READY. The following
! function is used only for first cycle multipass scheduling. */
! static int
! max_issue (struct ready_list *ready, int *index, int max_points)
{
! int n, i, all, n_ready, best, delay, tries_num, points = -1;
struct choice_entry *top;
rtx insn;
best = 0;
! memcpy (choice_stack->state, curr_state, dfa_state_size);
top = choice_stack;
! top->rest = cached_first_cycle_multipass_dfa_lookahead;
top->n = 0;
! n_ready = ready->n_ready;
for (all = i = 0; i < n_ready; i++)
if (!ready_try [i])
all++;
i = 0;
tries_num = 0;
for (;;)
{
! if (top->rest == 0 || i >= n_ready)
{
if (top == choice_stack)
break;
! if (best < top - choice_stack && ready_try [0])
{
! best = top - choice_stack;
! *index = choice_stack [1].index;
! points = top->n;
! if (top->n == max_points || best == all)
! break;
}
i = top->index;
ready_try [i] = 0;
top--;
! memcpy (curr_state, top->state, dfa_state_size);
}
else if (!ready_try [i])
{
--- 2021,2133 ----
of all instructions in READY. The function stops immediately,
if it reached the such a solution, that all instruction can be issued.
INDEX will contain index of the best insn in READY. The following
! function is used only for first cycle multipass scheduling.
!
! PRIVILEGED_N >= 0
!
! This function expects recognized insns only. All USEs,
! CLOBBERs, etc must be filtered elsewhere. */
! int
! max_issue (struct ready_list *ready, int privileged_n, state_t state,
! int *index)
{
! int n, i, all, n_ready, best, delay, tries_num, points = -1, max_points;
! int more_issue;
struct choice_entry *top;
rtx insn;
+ n_ready = ready->n_ready;
+ gcc_assert (dfa_lookahead >= 1 && privileged_n >= 0
+ && privileged_n <= n_ready);
+
+ /* Init MAX_LOOKAHEAD_TRIES. */
+ if (cached_first_cycle_multipass_dfa_lookahead != dfa_lookahead)
+ {
+ cached_first_cycle_multipass_dfa_lookahead = dfa_lookahead;
+ max_lookahead_tries = 100;
+ for (i = 0; i < issue_rate; i++)
+ max_lookahead_tries *= dfa_lookahead;
+ }
+
+ /* Init max_points. */
+ max_points = 0;
+ more_issue = issue_rate - cycle_issued_insns;
+ gcc_assert (more_issue >= 0);
+
+ for (i = 0; i < n_ready; i++)
+ if (!ready_try [i])
+ {
+ if (more_issue-- > 0)
+ max_points += ISSUE_POINTS (ready_element (ready, i));
+ else
+ break;
+ }
+
+ /* The number of the issued insns in the best solution. */
best = 0;
! top = choice_stack;
!
! /* Set initial state of the search. */
! memcpy (top->state, state, dfa_state_size);
! top->rest = dfa_lookahead;
top->n = 0;
!
! /* Count the number of the insns to search among. */
for (all = i = 0; i < n_ready; i++)
if (!ready_try [i])
all++;
+
+ /* I is the index of the insn to try next. */
i = 0;
tries_num = 0;
for (;;)
{
! if (/* If we've reached a dead end or searched enough of what we have
! been asked... */
! top->rest == 0
! /* Or have nothing else to try. */
! || i >= n_ready)
{
+ /* ??? (... || i == n_ready). */
+ gcc_assert (i <= n_ready);
+
if (top == choice_stack)
break;
!
! if (best < top - choice_stack)
{
! if (privileged_n)
! {
! n = privileged_n;
! /* Try to find issued privileged insn. */
! while (n && !ready_try[--n]);
! }
!
! if (/* If all insns are equally good... */
! privileged_n == 0
! /* Or a privileged insn will be issued. */
! || ready_try[n])
! /* Then we have a solution. */
! {
! best = top - choice_stack;
! /* This is the index of the insn issued first in this
! solution. */
! *index = choice_stack [1].index;
! points = top->n;
! if (top->n == max_points || best == all)
! break;
! }
}
+
+ /* Set ready-list index to point to the last insn
+ ('i++' below will advance it to the next insn). */
i = top->index;
+
+ /* Backtrack. */
ready_try [i] = 0;
top--;
! memcpy (state, top->state, dfa_state_size);
}
else if (!ready_try [i])
{
*************** max_issue (struct ready_list *ready, int
*** 1943,1987 ****
if (tries_num > max_lookahead_tries)
break;
insn = ready_element (ready, i);
! delay = state_transition (curr_state, insn);
if (delay < 0)
{
! if (state_dead_lock_p (curr_state))
top->rest = 0;
else
top->rest--;
n = top->n;
! if (memcmp (top->state, curr_state, dfa_state_size) != 0)
n += ISSUE_POINTS (insn);
top++;
! top->rest = cached_first_cycle_multipass_dfa_lookahead;
top->index = i;
top->n = n;
! memcpy (top->state, curr_state, dfa_state_size);
ready_try [i] = 1;
i = -1;
}
}
i++;
}
- while (top != choice_stack)
- {
- ready_try [top->index] = 0;
- top--;
- }
- memcpy (curr_state, choice_stack->state, dfa_state_size);
! if (sched_verbose >= 4)
! fprintf (sched_dump, ";;\t\tChoosed insn : %s; points: %d/%d\n",
! (*current_sched_info->print_insn) (ready_element (ready, *index),
! 0),
! points, max_points);
! return best;
}
/* The following function chooses insn from READY and modifies
! *N_READY and READY. The following function is used only for first
cycle multipass scheduling.
Return:
-1 if cycle should be advanced,
--- 2135,2177 ----
if (tries_num > max_lookahead_tries)
break;
insn = ready_element (ready, i);
! delay = state_transition (state, insn);
if (delay < 0)
{
! if (state_dead_lock_p (state))
top->rest = 0;
else
top->rest--;
+ n = top->n;
! if (memcmp (top->state, state, dfa_state_size) != 0)
n += ISSUE_POINTS (insn);
+
+ /* Advance to the next choice_entry. */
top++;
! /* Initialize it. */
! top->rest = dfa_lookahead;
top->index = i;
top->n = n;
! memcpy (top->state, state, dfa_state_size);
! ready_try [i] = 1;
i = -1;
}
}
+
+ /* Increase ready-list index. */
i++;
}
! /* Restore the original state of the DFA. */
! memcpy (state, choice_stack->state, dfa_state_size);
!
! return best;
}
/* The following function chooses insn from READY and modifies
! READY. The following function is used only for first
cycle multipass scheduling.
Return:
-1 if cycle should be advanced,
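Moving the MAX_LOOKAHEAD_TRIES initialization into max_issue () also makes
the search bound easier to see: the backtracking search is capped at
100 * dfa_lookahead ** issue_rate tries. For reference, a plain-C mirror
of that computation (the parameter names are stand-ins for the globals):

```c
#include <assert.h>

/* Mirror of the bound now computed inside max_issue (): start from 100
   and multiply by the lookahead depth once per issue slot, i.e.
   100 * dfa_lookahead ** issue_rate backtracking steps at most. */
static int
lookahead_tries_bound (int dfa_lookahead, int issue_rate)
{
  int i, tries = 100;

  for (i = 0; i < issue_rate; i++)
    tries *= dfa_lookahead;

  return tries;
}
```

This grows quickly with wide issue and deep lookahead, which is why the cap is recomputed only when dfa_lookahead changes.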
*************** choose_ready (struct ready_list *ready,
*** 2024,2038 ****
/* Try to choose the better insn. */
int index = 0, i, n;
rtx insn;
! int more_issue, max_points, try_data = 1, try_control = 1;
- if (cached_first_cycle_multipass_dfa_lookahead != lookahead)
- {
- cached_first_cycle_multipass_dfa_lookahead = lookahead;
- max_lookahead_tries = 100;
- for (i = 0; i < issue_rate; i++)
- max_lookahead_tries *= lookahead;
- }
insn = ready_element (ready, 0);
if (INSN_CODE (insn) < 0)
{
--- 2214,2222 ----
/* Try to choose the better insn. */
int index = 0, i, n;
rtx insn;
! int try_data = 1, try_control = 1;
! ds_t ts;
insn = ready_element (ready, 0);
if (INSN_CODE (insn) < 0)
{
*************** choose_ready (struct ready_list *ready,
*** 2071,2081 ****
}
}
! if ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
! || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC))
! || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
! && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
! (insn)))
/* Discard speculative instruction that stands first in the ready
list. */
{
--- 2255,2267 ----
}
}
! ts = TODO_SPEC (insn);
! if ((ts & SPECULATIVE)
! && (((!try_data && (ts & DATA_SPEC))
! || (!try_control && (ts & CONTROL_SPEC)))
! || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
! && !targetm.sched
! .first_cycle_multipass_dfa_lookahead_guard_spec (insn))))
/* Discard speculative instruction that stands first in the ready
list. */
{
*************** choose_ready (struct ready_list *ready,
*** 2083,2113 ****
return 1;
}
! max_points = ISSUE_POINTS (insn);
! more_issue = issue_rate - cycle_issued_insns - 1;
for (i = 1; i < ready->n_ready; i++)
{
insn = ready_element (ready, i);
- ready_try [i]
- = (INSN_CODE (insn) < 0
- || (!try_data && (TODO_SPEC (insn) & DATA_SPEC))
- || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC))
- || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
- && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
- (insn)));
! if (!ready_try [i] && more_issue-- > 0)
! max_points += ISSUE_POINTS (insn);
}
! if (max_issue (ready, &index, max_points) == 0)
{
*insn_ptr = ready_remove_first (ready);
return 0;
}
else
{
*insn_ptr = ready_remove (ready, index);
return 0;
}
--- 2269,2317 ----
return 1;
}
! ready_try[0] = 0;
for (i = 1; i < ready->n_ready; i++)
{
insn = ready_element (ready, i);
! ready_try [i]
! = ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
! || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC)));
}
! /* Let the target filter the search space. */
! for (i = 1; i < ready->n_ready; i++)
! if (!ready_try[i])
! {
! insn = ready_element (ready, i);
!
! gcc_assert (INSN_CODE (insn) >= 0
! || recog_memoized (insn) < 0);
!
! ready_try [i]
! = (/* INSN_CODE check can be omitted here as it is also done later
! in max_issue (). */
! INSN_CODE (insn) < 0
! || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
! && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
! (insn)));
! }
!
! if (max_issue (ready, 1, curr_state, &index) == 0)
{
+ if (sched_verbose >= 4)
+ fprintf (sched_dump, ";;\t\tChosen none\n");
*insn_ptr = ready_remove_first (ready);
return 0;
}
else
{
+ if (sched_verbose >= 4)
+ fprintf (sched_dump, ";;\t\tChosen insn : %s\n",
+ (*current_sched_info->print_insn)
+ (ready_element (ready, index), 0));
+
*insn_ptr = ready_remove (ready, index);
return 0;
}
*************** choose_ready (struct ready_list *ready,
*** 2119,2127 ****
region. */
void
! schedule_block (basic_block *target_bb, int rgn_n_insns1)
{
- struct ready_list ready;
int i, first_cycle_insn_p;
int can_issue_more;
state_t temp_state = NULL; /* It is used for multipass scheduling. */
--- 2323,2330 ----
region. */
void
! schedule_block (basic_block *target_bb)
{
int i, first_cycle_insn_p;
int can_issue_more;
state_t temp_state = NULL; /* It is used for multipass scheduling. */
*************** schedule_block (basic_block *target_bb,
*** 2150,2164 ****
state_reset (curr_state);
! /* Allocate the ready list. */
! readyp = &ready;
! ready.vec = NULL;
! ready_try = NULL;
! choice_stack = NULL;
!
! rgn_n_insns = -1;
! extend_ready (rgn_n_insns1 + 1);
!
ready.first = ready.veclen - 1;
ready.n_ready = 0;
--- 2353,2359 ----
state_reset (curr_state);
! /* Clear the ready list. */
ready.first = ready.veclen - 1;
ready.n_ready = 0;
*************** schedule_block (basic_block *target_bb,
*** 2445,2451 ****
(*current_sched_info->begin_schedule_ready) (insn,
last_scheduled_insn);
! move_insn (insn);
last_scheduled_insn = insn;
if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
--- 2640,2647 ----
(*current_sched_info->begin_schedule_ready) (insn,
last_scheduled_insn);
! move_insn (insn, last_scheduled_insn, current_sched_info->next_tail);
! reemit_notes (insn);
last_scheduled_insn = insn;
if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
*************** schedule_block (basic_block *target_bb,
*** 2532,2537 ****
--- 2728,2736 ----
}
}
+ if (sched_verbose)
+ fprintf (sched_dump, ";; total time = %d\n", clock_var);
+ if (!current_sched_info->queue_must_finish_empty
|| haifa_recovery_bb_recently_added_p)
{
*************** schedule_block (basic_block *target_bb,
*** 2547,2605 ****
if (targetm.sched.md_finish)
{
targetm.sched.md_finish (sched_dump, sched_verbose);
-
/* Target might have added some instructions to the scheduled block.
in its md_finish () hook. These new insns don't have any data
initialized and to identify them we extend h_i_d so that they'll
! get zero luids.*/
! extend_h_i_d ();
}
/* Update head/tail boundaries. */
head = NEXT_INSN (prev_head);
tail = last_scheduled_insn;
! /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
! previously found among the insns. Insert them at the beginning
! of the insns. */
! if (note_list != 0)
! {
! basic_block head_bb = BLOCK_FOR_INSN (head);
! rtx note_head = note_list;
!
! while (PREV_INSN (note_head))
! {
! set_block_for_insn (note_head, head_bb);
! note_head = PREV_INSN (note_head);
! }
! /* In the above cycle we've missed this note: */
! set_block_for_insn (note_head, head_bb);
!
! PREV_INSN (note_head) = PREV_INSN (head);
! NEXT_INSN (PREV_INSN (head)) = note_head;
! PREV_INSN (head) = note_list;
! NEXT_INSN (note_list) = head;
! head = note_head;
! }
!
! /* Debugging. */
! if (sched_verbose)
! {
! fprintf (sched_dump, ";; total time = %d\n;; new head = %d\n",
! clock_var, INSN_UID (head));
! fprintf (sched_dump, ";; new tail = %d\n\n",
! INSN_UID (tail));
! }
current_sched_info->head = head;
current_sched_info->tail = tail;
-
- free (ready.vec);
-
- free (ready_try);
- for (i = 0; i <= rgn_n_insns; i++)
- free (choice_stack [i].state);
- free (choice_stack);
}

/* Set_priorities: compute priority of each insn in the block. */
--- 2746,2770 ----
if (targetm.sched.md_finish)
{
targetm.sched.md_finish (sched_dump, sched_verbose);
/* Target might have added some instructions to the scheduled block.
in its md_finish () hook. These new insns don't have any data
initialized and to identify them we extend h_i_d so that they'll
! get zero luids. */
! sched_init_luids (NULL, NULL, NULL, NULL);
}
+ if (sched_verbose)
+ fprintf (sched_dump, ";; new head = %d\n;; new tail = %d\n\n",
+ INSN_UID (head), INSN_UID (tail));
+
/* Update head/tail boundaries. */
head = NEXT_INSN (prev_head);
tail = last_scheduled_insn;
! head = restore_other_notes (head, NULL);
current_sched_info->head = head;
current_sched_info->tail = tail;
}

/* Set_priorities: compute priority of each insn in the block. */
*************** set_priorities (rtx head, rtx tail)
*** 2638,2685 ****
return n_insn;
}
! /* Next LUID to assign to an instruction. */
! static int luid;
! /* Initialize some global state for the scheduler. */
void
sched_init (void)
{
- basic_block b;
- rtx insn;
- int i;
-
- /* Switch to working copy of sched_info. */
- memcpy (&current_sched_info_var, current_sched_info,
- sizeof (current_sched_info_var));
- current_sched_info = &current_sched_info_var;
- /* Disable speculative loads in their presence if cc0 defined. */
#ifdef HAVE_cc0
flag_schedule_speculative_load = 0;
#endif
- /* Set dump and sched_verbose for the desired debugging output. If no
- dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
- For -fsched-verbose=N, N>=10, print everything to stderr. */
- sched_verbose = sched_verbose_param;
- if (sched_verbose_param == 0 && dump_file)
- sched_verbose = 1;
- sched_dump = ((sched_verbose_param >= 10 || !dump_file)
- ? stderr : dump_file);
- /* Initialize SPEC_INFO. */
if (targetm.sched.set_sched_flags)
{
spec_info = &spec_info_var;
targetm.sched.set_sched_flags (spec_info);
! if (current_sched_info->flags & DO_SPECULATION)
! spec_info->weakness_cutoff =
! (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
else
/* So we won't read anything accidentally. */
! spec_info = 0;
}
else
/* So we won't read anything accidentally. */
--- 2803,2851 ----
return n_insn;
}
! /* Set dump and sched_verbose for the desired debugging output. If no
! dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
! For -fsched-verbose=N, N>=10, print everything to stderr. */
! void
! setup_sched_dump (void)
! {
! sched_verbose = sched_verbose_param;
! if (sched_verbose_param == 0 && dump_file)
! sched_verbose = 1;
! sched_dump = ((sched_verbose_param >= 10 || !dump_file)
! ? stderr : dump_file);
! }
! /* Initialize some global state for the scheduler. This function works
! with the common data shared between all the schedulers. It is called
! from the scheduler specific initialization routine. */
void
sched_init (void)
{
/* Disable speculative loads in their presence if cc0 defined. */
#ifdef HAVE_cc0
flag_schedule_speculative_load = 0;
#endif
/* Initialize SPEC_INFO. */
if (targetm.sched.set_sched_flags)
{
spec_info = &spec_info_var;
targetm.sched.set_sched_flags (spec_info);
!
! if (spec_info->mask != 0)
! {
! spec_info->data_weakness_cutoff =
! (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
! spec_info->control_weakness_cutoff =
! (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF)
! * REG_BR_PROB_BASE) / 100;
! }
else
/* So we won't read anything accidentally. */
! spec_info = NULL;
! }
else
/* So we won't read anything accidentally. */
*************** sched_init (void)
*** 2698,2715 ****
cached_first_cycle_multipass_dfa_lookahead = 0;
}
! old_max_uid = 0;
! h_i_d = 0;
! extend_h_i_d ();
!
! for (i = 0; i < old_max_uid; i++)
! {
! h_i_d[i].cost = -1;
! h_i_d[i].todo_spec = HARD_DEP;
! h_i_d[i].queue_index = QUEUE_NOWHERE;
! h_i_d[i].tick = INVALID_TICK;
! h_i_d[i].inter_tick = INVALID_TICK;
! }
if (targetm.sched.init_dfa_pre_cycle_insn)
targetm.sched.init_dfa_pre_cycle_insn ();
--- 2864,2873 ----
cached_first_cycle_multipass_dfa_lookahead = 0;
}
! if (targetm.sched.first_cycle_multipass_dfa_lookahead)
! dfa_lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
! else
! dfa_lookahead = 0;
if (targetm.sched.init_dfa_pre_cycle_insn)
targetm.sched.init_dfa_pre_cycle_insn ();
*************** sched_init (void)
*** 2719,2785 ****
dfa_start ();
dfa_state_size = state_size ();
- curr_state = xmalloc (dfa_state_size);
! h_i_d[0].luid = 0;
! luid = 1;
! FOR_EACH_BB (b)
! for (insn = BB_HEAD (b); ; insn = NEXT_INSN (insn))
! {
! INSN_LUID (insn) = luid;
! /* Increment the next luid, unless this is a note. We don't
! really need separate IDs for notes and we don't want to
! schedule differently depending on whether or not there are
! line-number notes, i.e., depending on whether or not we're
! generating debugging information. */
! if (!NOTE_P (insn))
! ++luid;
! if (insn == BB_END (b))
! break;
! }
! init_dependency_caches (luid);
! init_alias_analysis ();
! old_last_basic_block = 0;
! extend_bb ();
! /* Compute INSN_REG_WEIGHT for all blocks. We must do this before
! removing death notes. */
! FOR_EACH_BB_REVERSE (b)
! find_insn_reg_weight (b);
! if (targetm.sched.md_init_global)
! targetm.sched.md_init_global (sched_dump, sched_verbose, old_max_uid);
! nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
! before_recovery = 0;
haifa_recovery_bb_ever_added_p = false;
#ifdef ENABLE_CHECKING
! /* This is used preferably for finding bugs in check_cfg () itself. */
check_cfg (0, 0);
#endif
- }
! /* Free global data used during insn scheduling. */
void
! sched_finish (void)
{
! free (h_i_d);
! free (curr_state);
! dfa_finish ();
! free_dependency_caches ();
! end_alias_analysis ();
- if (targetm.sched.md_finish_global)
- targetm.sched.md_finish_global (sched_dump, sched_verbose);
-
if (spec_info && spec_info->dump)
{
char c = reload_completed ? 'a' : 'b';
--- 2877,2969 ----
dfa_start ();
dfa_state_size = state_size ();
! init_alias_analysis ();
! df_set_flags (DF_LR_RUN_DCE);
! df_note_add_problem ();
! /* More problems needed for interloop dep calculation in SMS. */
! if (common_sched_info->sched_pass_id == SCHED_SMS_PASS)
! {
! df_rd_add_problem ();
! df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
! }
! df_analyze ();
!
! /* Do not run DCE after reload, as this can kill nops inserted
! by bundling. */
! if (reload_completed)
! df_clear_flags (DF_LR_RUN_DCE);
! regstat_compute_calls_crossed ();
! if (targetm.sched.md_init_global)
! targetm.sched.md_init_global (sched_dump, sched_verbose,
! get_max_uid () + 1);
! curr_state = xmalloc (dfa_state_size);
! }
! static void haifa_init_only_bb (basic_block, basic_block);
! /* Initialize data structures specific to the Haifa scheduler. */
! void
! haifa_sched_init (void)
! {
! setup_sched_dump ();
! sched_init ();
!
! if (spec_info != NULL)
! {
! sched_deps_info->use_deps_list = 1;
! sched_deps_info->generate_spec_deps = 1;
! }
+ /* Initialize luids, dependency caches, target and h_i_d for the
+ whole function. */
+ {
+ bb_vec_t bbs = VEC_alloc (basic_block, heap, n_basic_blocks);
+ basic_block bb;
+
+ sched_init_bbs ();
+
+ FOR_EACH_BB (bb)
+ VEC_quick_push (basic_block, bbs, bb);
+ sched_init_luids (bbs, NULL, NULL, NULL);
+ sched_deps_init (true);
+ sched_extend_target ();
+ haifa_init_h_i_d (bbs, NULL, NULL, NULL);
+
+ VEC_free (basic_block, heap, bbs);
+ }
+
+ sched_init_only_bb = haifa_init_only_bb;
+ sched_split_block = sched_split_block_1;
+ sched_create_empty_bb = sched_create_empty_bb_1;
haifa_recovery_bb_ever_added_p = false;
#ifdef ENABLE_CHECKING
! /* This is used preferably for finding bugs in check_cfg () itself.
! We must call sched_bbs_init () before check_cfg () because check_cfg ()
! assumes that the last insn in the last bb has a non-null successor. */
check_cfg (0, 0);
#endif
! nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
! before_recovery = 0;
! after_recovery = 0;
! }
+ /* Finish work with the data specific to the Haifa scheduler. */
void
! haifa_sched_finish (void)
{
! sched_create_empty_bb = NULL;
! sched_split_block = NULL;
! sched_init_only_bb = NULL;
if (spec_info && spec_info->dump)
{
char c = reload_completed ? 'a' : 'b';
*************** sched_finish (void)
*** 2801,2813 ****
c, nr_be_in_control);
}
#ifdef ENABLE_CHECKING
/* After reload ia64 backend clobbers CFG, so can't check anything. */
if (!reload_completed)
check_cfg (0, 0);
#endif
-
- current_sched_info = NULL;
}
/* Fix INSN_TICKs of the instructions in the current block as well as
--- 2985,3021 ----
c, nr_be_in_control);
}
+ /* Finalize h_i_d, dependency caches, and luids for the whole
+ function. Target will be finalized in md_global_finish (). */
+ sched_deps_finish ();
+ sched_finish_luids ();
+ current_sched_info = NULL;
+ sched_finish ();
+ }
+
+ /* Free global data used during insn scheduling. This function works with
+ the common data shared between the schedulers. */
+
+ void
+ sched_finish (void)
+ {
+ haifa_finish_h_i_d ();
+ free (curr_state);
+
+ if (targetm.sched.md_finish_global)
+ targetm.sched.md_finish_global (sched_dump, sched_verbose);
+
+ end_alias_analysis ();
+
+ regstat_free_calls_crossed ();
+
+ dfa_finish ();
+ #ifdef ENABLE_CHECKING
/* After reload ia64 backend clobbers CFG, so can't check anything. */
if (!reload_completed)
check_cfg (0, 0);
#endif
}
/* Fix INSN_TICKs of the instructions in the current block as well as
*************** fix_inter_tick (rtx head, rtx tail)
*** 2883,2888 ****
--- 3091,3098 ----
}
bitmap_clear (&processed);
}
+
+ static int haifa_speculate_insn (rtx, ds_t, rtx *);
/* Check if NEXT is ready to be added to the ready or queue list.
If "yes", add it to the proper list.
*************** try_ready (rtx next)
*** 2942,2948 ****
*ts = ds_merge (*ts, ds);
}
! if (dep_weak (*ts) < spec_info->weakness_cutoff)
/* Too few points. */
*ts = (*ts & ~SPECULATIVE) | HARD_DEP;
}
--- 3152,3158 ----
*ts = ds_merge (*ts, ds);
}
! if (dep_weak (*ts) < spec_info->data_weakness_cutoff)
/* Too few points. */
*ts = (*ts & ~SPECULATIVE) | HARD_DEP;
}
*************** try_ready (rtx next)
*** 2976,2982 ****
gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
! res = speculate_insn (next, *ts, &new_pat);

switch (res)
{
--- 3186,3192 ----
gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
! res = haifa_speculate_insn (next, *ts, &new_pat);

switch (res)
{
*************** try_ready (rtx next)
*** 3000,3006 ****
save it. */
ORIG_PAT (next) = PATTERN (next);
! change_pattern (next, new_pat);
break;
default:
--- 3210,3216 ----
save it. */
ORIG_PAT (next) = PATTERN (next);
! haifa_change_pattern (next, new_pat);
break;
default:
*************** try_ready (rtx next)
*** 3031,3037 ****
ORIG_PAT field. Except one case - speculation checks have ORIG_PAT
pat too, so skip them. */
{
! change_pattern (next, ORIG_PAT (next));
ORIG_PAT (next) = 0;
}
--- 3241,3247 ----
ORIG_PAT field. Except one case - speculation checks have ORIG_PAT
pat too, so skip them. */
{
! haifa_change_pattern (next, ORIG_PAT (next));
ORIG_PAT (next) = 0;
}
*************** change_queue_index (rtx next, int delay)
*** 3150,3237 ****
}
}
! /* Extend H_I_D data. */
! static void
! extend_h_i_d (void)
! {
! /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
! pseudos which do not cross calls. */
! int new_max_uid = get_max_uid () + 1;
!
! h_i_d = xrecalloc (h_i_d, new_max_uid, old_max_uid, sizeof (*h_i_d));
! old_max_uid = new_max_uid;
!
! if (targetm.sched.h_i_d_extended)
! targetm.sched.h_i_d_extended ();
! }
! /* Extend READY, READY_TRY and CHOICE_STACK arrays.
! N_NEW_INSNS is the number of additional elements to allocate. */
! static void
! extend_ready (int n_new_insns)
{
int i;
! readyp->veclen = rgn_n_insns + n_new_insns + 1 + issue_rate;
! readyp->vec = XRESIZEVEC (rtx, readyp->vec, readyp->veclen);
!
! ready_try = xrecalloc (ready_try, rgn_n_insns + n_new_insns + 1,
! rgn_n_insns + 1, sizeof (char));
! rgn_n_insns += n_new_insns;
choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
! rgn_n_insns + 1);
! for (i = rgn_n_insns; n_new_insns--; i--)
choice_stack[i].state = xmalloc (dfa_state_size);
}
! /* Extend global scheduler structures (those, that live across calls to
! schedule_block) to include information about just emitted INSN. */
! static void
! extend_global (rtx insn)
{
! gcc_assert (INSN_P (insn));
!
! /* These structures have scheduler scope. */
!
! /* Init h_i_d. */
! extend_h_i_d ();
! init_h_i_d (insn);
! /* Init data handled in sched-deps.c. */
! sd_init_insn (insn);
! /* Extend dependency caches by one element. */
! extend_dependency_caches (1, false);
! }
! /* Extends global and local scheduler structures to include information
! about just emitted INSN. */
! static void
! extend_all (rtx insn)
! {
! extend_global (insn);
! /* These structures have block scope. */
! extend_ready (1);
!
! (*current_sched_info->add_remove_insn) (insn, 0);
}
! /* Initialize h_i_d entry of the new INSN with default values.
! Values, that are not explicitly initialized here, hold zero. */
! static void
! init_h_i_d (rtx insn)
{
! INSN_LUID (insn) = luid++;
! INSN_COST (insn) = -1;
! TODO_SPEC (insn) = HARD_DEP;
! QUEUE_INDEX (insn) = QUEUE_NOWHERE;
! INSN_TICK (insn) = INVALID_TICK;
! INTER_TICK (insn) = INVALID_TICK;
! find_insn_reg_weight1 (insn);
}
/* Generates recovery code for INSN. */
--- 3360,3429 ----
}
}
! static int sched_ready_n_insns = -1;
! /* Initialize per region data structures. */
! void
! sched_extend_ready_list (int new_sched_ready_n_insns)
{
int i;
! if (sched_ready_n_insns == -1)
! /* At the first call we need to initialize one more choice_stack
! entry. */
! {
! i = 0;
! sched_ready_n_insns = 0;
! }
! else
! i = sched_ready_n_insns + 1;
! ready.veclen = new_sched_ready_n_insns + issue_rate;
! ready.vec = XRESIZEVEC (rtx, ready.vec, ready.veclen);
!
! gcc_assert (new_sched_ready_n_insns >= sched_ready_n_insns);
+ ready_try = xrecalloc (ready_try, new_sched_ready_n_insns,
+ sched_ready_n_insns, sizeof (*ready_try));
+
+ /* We allocate +1 element to save initial state in the choice_stack[0]
+ entry. */
choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
! new_sched_ready_n_insns + 1);
! for (; i <= new_sched_ready_n_insns; i++)
choice_stack[i].state = xmalloc (dfa_state_size);
+
+ sched_ready_n_insns = new_sched_ready_n_insns;
}
! /* Free per region data structures. */
! void
! sched_finish_ready_list (void)
{
! int i;
! free (ready.vec);
! ready.vec = NULL;
! ready.veclen = 0;
! free (ready_try);
! ready_try = NULL;
! for (i = 0; i <= sched_ready_n_insns; i++)
! free (choice_stack [i].state);
! free (choice_stack);
! choice_stack = NULL;
! sched_ready_n_insns = -1;
}
! static int
! haifa_luid_for_non_insn (rtx x)
{
! gcc_assert (NOTE_P (x) || LABEL_P (x));
!
! return 0;
}
/* Generates recovery code for INSN. */
*************** process_insn_forw_deps_be_in_spec (rtx i
*** 3282,3288 ****
it can be removed from the ready (or queue) list only
due to backend decision. Hence we can't let the
probability of the speculative dep to decrease. */
! dep_weak (ds) <= dep_weak (fs))
{
ds_t new_ds;
--- 3474,3480 ----
it can be removed from the ready (or queue) list only
due to backend decision. Hence we can't let the
probability of the speculative dep to decrease. */
! ds_weak (ds) <= ds_weak (fs))
{
ds_t new_ds;
*************** begin_speculative_block (rtx insn)
*** 3323,3328 ****
--- 3515,3522 ----
TODO_SPEC (insn) &= ~BEGIN_SPEC;
}
+ static void haifa_init_insn (rtx);
+
/* Generates recovery code for BE_IN speculative INSN. */
static void
add_to_speculative_block (rtx insn)
*************** add_to_speculative_block (rtx insn)
*** 3390,3396 ****
rec = BLOCK_FOR_INSN (check);
twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
! extend_global (twin);
sd_copy_back_deps (twin, insn, true);
--- 3584,3590 ----
rec = BLOCK_FOR_INSN (check);
twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
! haifa_init_insn (twin);
sd_copy_back_deps (twin, insn, true);
*************** dep_weak (ds_t ds)
*** 3483,3495 ****
do
{
if (ds & dt)
! {
! res *= (ds_t) get_dep_weak (ds, dt);
! n++;
! }
if (dt == LAST_SPEC_TYPE)
! break;
dt <<= SPEC_TYPE_SHIFT;
}
while (1);
--- 3677,3689 ----
do
{
if (ds & dt)
! {
! res *= (ds_t) get_dep_weak (ds, dt);
! n++;
! }
if (dt == LAST_SPEC_TYPE)
! break;
dt <<= SPEC_TYPE_SHIFT;
}
while (1);
*************** dep_weak (ds_t ds)
*** 3508,3514 ****
/* Helper function.
Find fallthru edge from PRED. */
! static edge
find_fallthru_edge (basic_block pred)
{
edge e;
--- 3702,3708 ----
/* Helper function.
Find fallthru edge from PRED. */
! edge
find_fallthru_edge (basic_block pred)
{
edge e;
*************** find_fallthru_edge (basic_block pred)
*** 3542,3548 ****
/* Initialize BEFORE_RECOVERY variable. */
static void
! init_before_recovery (void)
{
basic_block last;
edge e;
--- 3736,3742 ----
/* Initialize BEFORE_RECOVERY variable. */
static void
! init_before_recovery (basic_block *before_recovery_ptr)
{
basic_block last;
edge e;
*************** init_before_recovery (void)
*** 3561,3570 ****
basic_block single, empty;
rtx x, label;
! single = create_empty_bb (last);
! empty = create_empty_bb (single); ! single->count = last->count; empty->count = last->count;
single->frequency = last->frequency;
empty->frequency = last->frequency;
--- 3755,3778 ----
basic_block single, empty;
rtx x, label;
! /* If the fallthrough edge to exit we've found is from the block we've
! created before, don't do anything more. */
! if (last == after_recovery)
! return;
! adding_bb_to_current_region_p = false;
!
! single = sched_create_empty_bb (last);
! empty = sched_create_empty_bb (single);
!
! /* Add new blocks to the root loop. */
! if (current_loops != NULL)
! {
! add_bb_to_loop (single, VEC_index (loop_p, current_loops->larray, 0));
! add_bb_to_loop (empty, VEC_index (loop_p, current_loops->larray, 0));
! }
!
! single->count = last->count;
empty->count = last->count;
single->frequency = last->frequency;
empty->frequency = last->frequency;
*************** init_before_recovery (void)
*** 3580,3593 ****
x = emit_jump_insn_after (gen_jump (label), BB_END (single));
JUMP_LABEL (x) = label;
LABEL_NUSES (label)++;
! extend_global (x);
emit_barrier_after (x);
! add_block (empty, 0);
! add_block (single, 0);
before_recovery = single;
if (sched_verbose >= 2 && spec_info->dump)
fprintf (spec_info->dump,
--- 3788,3807 ----
x = emit_jump_insn_after (gen_jump (label), BB_END (single));
JUMP_LABEL (x) = label;
LABEL_NUSES (label)++;
! haifa_init_insn (x);
emit_barrier_after (x);
! sched_init_only_bb (empty, NULL);
! sched_init_only_bb (single, NULL);
! sched_extend_bb ();
+ adding_bb_to_current_region_p = true;
before_recovery = single;
+ after_recovery = empty;
+
+ if (before_recovery_ptr)
+ *before_recovery_ptr = before_recovery;
if (sched_verbose >= 2 && spec_info->dump)
fprintf (spec_info->dump,
*************** init_before_recovery (void)
*** 3599,3606 ****
}
/* Returns new recovery block. */
! static basic_block
! create_recovery_block (void)
{
rtx label;
rtx barrier;
--- 3813,3820 ----
}
/* Returns new recovery block. */
! basic_block
! sched_create_recovery_block (basic_block *before_recovery_ptr)
{
rtx label;
rtx barrier;
*************** create_recovery_block (void)
*** 3609,3616 ****
haifa_recovery_bb_recently_added_p = true;
haifa_recovery_bb_ever_added_p = true;
! if (!before_recovery)
! init_before_recovery ();
barrier = get_last_bb_insn (before_recovery);
gcc_assert (BARRIER_P (barrier));
--- 3823,3829 ----
haifa_recovery_bb_recently_added_p = true;
haifa_recovery_bb_ever_added_p = true;
! init_before_recovery (before_recovery_ptr);
barrier = get_last_bb_insn (before_recovery);
gcc_assert (BARRIER_P (barrier));
*************** create_recovery_block (void)
*** 3629,3639 ****
fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
rec->index);
- before_recovery = rec;
- return rec;
}
/* This function creates recovery code for INSN. If MUTATE_P is nonzero,
INSN is a simple check, that should be converted to branchy one. */
static void
--- 3842,3898 ----
fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
rec->index);
return rec;
}
+ /* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
+ and emit necessary jumps. */
+ void
+ sched_create_recovery_edges (basic_block first_bb, basic_block rec,
+ basic_block second_bb)

There is no ChangeLog entry for this new function.


+ {
+ rtx label;
+ rtx jump;
+ edge e;
+ int edge_flags;
+
+ /* This is fixing of incoming edge. */
+ /* ??? Which other flags should be specified? */
+ if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
+ /* Partition type is the same, if it is "unpartitioned". */
+ edge_flags = EDGE_CROSSING;
+ else
+ edge_flags = 0;
+
+ e = make_edge (first_bb, rec, edge_flags);
+ label = block_label (second_bb);
+ jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
+ JUMP_LABEL (jump) = label;
+ LABEL_NUSES (label)++;
+
+ if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
+ /* Partition type is the same, if it is "unpartitioned". */
+ {
+ /* Rewritten from cfgrtl.c. */
+ if (flag_reorder_blocks_and_partition
+ && targetm.have_named_sections
+ /*&& !any_condjump_p (jump)*/)
+ /* any_condjump_p (jump) == false.
+ We don't need the same note for the check because
+ any_condjump_p (check) == true. */
+ {
+ REG_NOTES (jump) = gen_rtx_EXPR_LIST (REG_CROSSING_JUMP,
+ NULL_RTX,
+ REG_NOTES (jump));
+ }
+ edge_flags = EDGE_CROSSING;
+ }
+ else
+ edge_flags = 0;
+
+ make_single_succ_edge (rec, second_bb, edge_flags);
+ }
+
/* This function creates recovery code for INSN. If MUTATE_P is nonzero,
INSN is a simple check, that should be converted to branchy one. */
static void
*************** create_check_block_twin (rtx insn, bool
*** 3645,3670 ****
sd_iterator_def sd_it;
dep_t dep;
dep_def _new_dep, *new_dep = &_new_dep;
! gcc_assert (ORIG_PAT (insn)
! && (!mutate_p
! || (IS_SPECULATION_SIMPLE_CHECK_P (insn)
! && !(TODO_SPEC (insn) & SPECULATIVE))));
/* Create recovery block. */
! if (mutate_p || targetm.sched.needs_block_p (insn))
{
! rec = create_recovery_block ();
label = BB_HEAD (rec);
}
else
{
rec = EXIT_BLOCK_PTR;
! label = 0;
}
/* Emit CHECK. */
! check = targetm.sched.gen_check (insn, label, mutate_p);
if (rec != EXIT_BLOCK_PTR)
{
--- 3904,3939 ----
sd_iterator_def sd_it;
dep_t dep;
dep_def _new_dep, *new_dep = &_new_dep;
+ ds_t todo_spec;
! gcc_assert (ORIG_PAT (insn) != NULL_RTX);
!
! if (!mutate_p)
! todo_spec= TODO_SPEC (insn);
! else
! {
! gcc_assert (IS_SPECULATION_SIMPLE_CHECK_P (insn)
! && (TODO_SPEC (insn) & SPECULATIVE) == 0);
!
! todo_spec = CHECK_SPEC (insn);
! }
!
! todo_spec &= SPECULATIVE;
/* Create recovery block. */
! if (mutate_p || targetm.sched.needs_block_p (todo_spec))
{
! rec = sched_create_recovery_block (NULL);
label = BB_HEAD (rec);
}
else
{
rec = EXIT_BLOCK_PTR;
! label = NULL_RTX;
}
/* Emit CHECK. */
! check = targetm.sched.gen_spec_check (insn, label, todo_spec);
if (rec != EXIT_BLOCK_PTR)
{
*************** create_check_block_twin (rtx insn, bool
*** 3680,3686 ****
check = emit_insn_before (check, insn);
/* Extend data structures. */
! extend_all (check);
RECOVERY_BLOCK (check) = rec;
if (sched_verbose && spec_info->dump)
--- 3949,3963 ----
check = emit_insn_before (check, insn);
/* Extend data structures. */
! haifa_init_insn (check);
!
! /* CHECK is being added to current region. Extend ready list. */
! gcc_assert (sched_ready_n_insns != -1);
! sched_extend_ready_list (sched_ready_n_insns + 1);
!
! if (current_sched_info->add_remove_insn)
! current_sched_info->add_remove_insn (insn, 0);
! RECOVERY_BLOCK (check) = rec;
if (sched_verbose && spec_info->dump)
*************** create_check_block_twin (rtx insn, bool
*** 3707,3713 ****
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
! extend_global (twin);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
--- 3984,3990 ----
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
! haifa_init_insn (twin);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
*************** create_check_block_twin (rtx insn, bool
*** 3733,3790 ****
{
basic_block first_bb, second_bb;
rtx jump;
- edge e;
- int edge_flags;
first_bb = BLOCK_FOR_INSN (check);
! e = split_block (first_bb, check);
! /* split_block emits note if *check == BB_END. Probably it
! is better to rip that note off. */
! gcc_assert (e->src == first_bb);
! second_bb = e->dest;
! /* This is fixing of incoming edge. */
! /* ??? Which other flags should be specified? */
! if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
! /* Partition type is the same, if it is "unpartitioned". */
! edge_flags = EDGE_CROSSING;
! else
! edge_flags = 0;
!
! e = make_edge (first_bb, rec, edge_flags);
! add_block (second_bb, first_bb);
!
! gcc_assert (NOTE_INSN_BASIC_BLOCK_P (BB_HEAD (second_bb)));
! label = block_label (second_bb);
! jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
! JUMP_LABEL (jump) = label;
! LABEL_NUSES (label)++;
! extend_global (jump);
! if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
! /* Partition type is the same, if it is "unpartitioned". */
! {
! /* Rewritten from cfgrtl.c. */
! if (flag_reorder_blocks_and_partition
! && targetm.have_named_sections
! /*&& !any_condjump_p (jump)*/)
! /* any_condjump_p (jump) == false.
! We don't need the same note for the check because
! any_condjump_p (check) == true. */
! {
! REG_NOTES (jump) = gen_rtx_EXPR_LIST (REG_CROSSING_JUMP,
! NULL_RTX,
! REG_NOTES (jump));
! }
! edge_flags = EDGE_CROSSING;
! }
! else
! edge_flags = 0;
!
! make_single_succ_edge (rec, second_bb, edge_flags);
!
! add_block (rec, EXIT_BLOCK_PTR);
}
/* Move backward dependences from INSN to CHECK and
--- 4010,4026 ----
{
basic_block first_bb, second_bb;
rtx jump;
first_bb = BLOCK_FOR_INSN (check);
! second_bb = sched_split_block (first_bb, check);
! sched_create_recovery_edges (first_bb, rec, second_bb);
! sched_init_only_bb (second_bb, first_bb);
! sched_init_only_bb (rec, EXIT_BLOCK_PTR);
! jump = BB_END (rec);
! haifa_init_insn (jump);
}
/* Move backward dependences from INSN to CHECK and
*************** fix_recovery_deps (basic_block rec)
*** 4000,4018 ****
add_jump_dependencies (insn, jump);
}
! /* Changes pattern of the INSN to NEW_PAT. */
! static void
! change_pattern (rtx insn, rtx new_pat)
{
int t;
t = validate_change (insn, &PATTERN (insn), new_pat, 0);
gcc_assert (t);
/* Invalidate INSN_COST, so it'll be recalculated. */
INSN_COST (insn) = -1;
/* Invalidate INSN_TICK, so it'll be recalculated. */
INSN_TICK (insn) = INVALID_TICK;
- dfa_clear_single_insn_cache (insn);
}
/* Return true if INSN can potentially be speculated with type DS. */
--- 4236,4263 ----
add_jump_dependencies (insn, jump);
}
! /* Change pattern of INSN to NEW_PAT. */
! void
! sched_change_pattern (rtx insn, rtx new_pat)
{
int t;
t = validate_change (insn, &PATTERN (insn), new_pat, 0);
gcc_assert (t);
+ dfa_clear_single_insn_cache (insn);
+ }
+
+ /* Change pattern of INSN to NEW_PAT. Invalidate cached haifa
+ instruction data. */
+ static void
+ haifa_change_pattern (rtx insn, rtx new_pat)
+ {
+ sched_change_pattern (insn, new_pat);
+
+ /* Invalidate INSN_COST, so it'll be recalculated. */
INSN_COST (insn) = -1;
/* Invalidate INSN_TICK, so it'll be recalculated. */
INSN_TICK (insn) = INVALID_TICK;
}
/* Return true if INSN can potentially be speculated with type DS. */
*************** sched_insn_is_legitimate_for_speculation
*** 4028,4034 ****
if (SCHED_GROUP_P (insn))
return false;
! if (IS_SPECULATION_CHECK_P (insn))
return false;
if (side_effects_p (PATTERN (insn)))
--- 4273,4279 ----
if (SCHED_GROUP_P (insn))
return false;
! if (IS_SPECULATION_CHECK_P ((rtx) insn))
return false;
if (side_effects_p (PATTERN (insn)))
*************** sched_insn_is_legitimate_for_speculation
*** 4045,4052 ****
0 - for speculation with REQUEST mode it is OK to use
current instruction pattern,
1 - need to change pattern for *NEW_PAT to be speculative. */
! static int
! speculate_insn (rtx insn, ds_t request, rtx *new_pat)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION
&& (request & SPECULATIVE)
--- 4290,4297 ----
0 - for speculation with REQUEST mode it is OK to use
current instruction pattern,
1 - need to change pattern for *NEW_PAT to be speculative. */
! int
! sched_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION
&& (request & SPECULATIVE)
*************** speculate_insn (rtx insn, ds_t request,
*** 4059,4065 ****
&& !(request & BEGIN_SPEC))
return 0;
! return targetm.sched.speculate_insn (insn, request & BEGIN_SPEC, new_pat);
}
/* Print some information about block BB, which starts with HEAD and
--- 4304,4323 ----
&& !(request & BEGIN_SPEC))
return 0;
! return targetm.sched.speculate_insn (insn, request, new_pat);
! }
!
! static int
! haifa_speculate_insn (rtx insn, ds_t request, rtx *new_pat)
! {
! gcc_assert (sched_deps_info->generate_spec_deps
! && !IS_SPECULATION_CHECK_P (insn));
!
! if (HAS_INTERNAL_DEP (insn)
! || SCHED_GROUP_P (insn))
! return -1;
!
! return sched_speculate_insn (insn, request, new_pat);
}
/* Print some information about block BB, which starts with HEAD and
*************** restore_bb_notes (basic_block first)
*** 4173,4219 ****
bb_header = 0;
}
- /* Extend per basic block data structures of the scheduler.
- If BB is NULL, initialize structures for the whole CFG.
- Otherwise, initialize them for the just created BB. */
- static void
- extend_bb (void)
- {
- rtx insn;
- - old_last_basic_block = last_basic_block;
- - /* The following is done to keep current_sched_info->next_tail non null. */
- - insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
- if (NEXT_INSN (insn) == 0
- || (!NOTE_P (insn)
- && !LABEL_P (insn)
- /* Don't emit a NOTE if it would end up before a BARRIER. */
- && !BARRIER_P (NEXT_INSN (insn))))
- {
- rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
- /* Make insn appear outside BB. */
- set_block_for_insn (note, NULL);
- BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
- }
- }
- - /* Add a basic block BB to extended basic block EBB.
- If EBB is EXIT_BLOCK_PTR, then BB is recovery block.
- If EBB is NULL, then BB should be a new region. */
- void
- add_block (basic_block bb, basic_block ebb)
- {
- gcc_assert (current_sched_info->flags & NEW_BBS);
- - extend_bb ();
- - if (current_sched_info->add_block)
- /* This changes only data structures of the front-end. */
- current_sched_info->add_block (bb, ebb);
- }
- /* Helper function.
Fix CFG after both in- and inter-block movement of
control_flow_insn_p JUMP. */
--- 4431,4436 ----
*************** fix_jump_move (rtx jump)
*** 4226,4232 ****
jump_bb = BLOCK_FOR_INSN (jump);
jump_bb_next = jump_bb->next_bb;
! gcc_assert (current_sched_info->flags & SCHED_EBB
|| IS_SPECULATION_BRANCHY_CHECK_P (jump));
if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
--- 4443,4449 ----
jump_bb = BLOCK_FOR_INSN (jump);
jump_bb_next = jump_bb->next_bb;
! gcc_assert (common_sched_info->sched_pass_id == SCHED_EBB_PASS
|| IS_SPECULATION_BRANCHY_CHECK_P (jump));
if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
*************** move_block_after_check (rtx jump)
*** 4274,4282 ****
df_mark_solutions_dirty ();
! if (current_sched_info->fix_recovery_cfg)
! current_sched_info->fix_recovery_cfg ! (bb->index, jump_bb->index, jump_bb_next->index);
}
/* Helper function for move_block_after_check.
--- 4491,4498 ----
df_mark_solutions_dirty ();
! common_sched_info->fix_recovery_cfg
! (bb->index, jump_bb->index, jump_bb_next->index);
}
/* Helper function for move_block_after_check.
*************** check_cfg (rtx head, rtx tail)
*** 4502,4507 ****
--- 4718,5026 ----
gcc_assert (bb == 0);
}
+ #endif /* ENABLE_CHECKING */
+ const struct sched_scan_info_def *sched_scan_info;
+ + /* Extend per basic block data structures. */
+ static void
+ extend_bb (void)
+ {
+ if (sched_scan_info->extend_bb)
+ sched_scan_info->extend_bb ();
+ }
+ + /* Init data for BB. */
+ static void
+ init_bb (basic_block bb)
+ {
+ if (sched_scan_info->init_bb)
+ sched_scan_info->init_bb (bb);
+ }
+ + /* Extend per insn data structures. */
+ static void
+ extend_insn (void)
+ {
+ if (sched_scan_info->extend_insn)
+ sched_scan_info->extend_insn ();
+ }
+ + /* Init data structures for INSN. */
+ static void
+ init_insn (rtx insn)
+ {
+ if (sched_scan_info->init_insn)
+ sched_scan_info->init_insn (insn);
+ }
+ + /* Init all insns in BB. */
+ static void
+ init_insns_in_bb (basic_block bb)
+ {
+ rtx insn;
+ + FOR_BB_INSNS (bb, insn)
+ init_insn (insn);
+ }
+
+ /* A driver function to add a set of basic blocks (BBS),
+ a single basic block (BB), a set of insns (INSNS) or a single insn (INSN)
+ to the scheduling region. */
+ void
+ sched_scan (const struct sched_scan_info_def *ssi,
+ bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
+ {
+ sched_scan_info = ssi;
+
+ if (bbs != NULL || bb != NULL)
+ {
+ extend_bb ();
+
+ if (bbs != NULL)
+ {
+ unsigned i;
+ basic_block x;
+
+ for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
+ init_bb (x);
+ }
+
+ if (bb != NULL)
+ init_bb (bb);
+ }
+
+ extend_insn ();
+
+ if (bbs != NULL)
+ {
+ unsigned i;
+ basic_block x;
+
+ for (i = 0; VEC_iterate (basic_block, bbs, i, x); i++)
+ init_insns_in_bb (x);
+ }
+
+ if (bb != NULL)
+ init_insns_in_bb (bb);
+
+ if (insns != NULL)
+ {
+ unsigned i;
+ rtx x;
+
+ for (i = 0; VEC_iterate (rtx, insns, i, x); i++)
+ init_insn (x);
+ }
+
+ if (insn != NULL)
+ init_insn (insn);
+ }
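For readers unfamiliar with the hook-table idiom that sched_scan relies on, here is a minimal self-contained sketch of the same pattern: a client fills in only the callbacks it needs, and the driver guards every call with a NULL check. All names below are illustrative, not the GCC ones.

```c
#include <assert.h>
#include <stddef.h>

/* Miniature of the sched_scan idiom: a table of optional hooks. */
typedef struct scan_hooks
{
  void (*extend) (void);         /* grow per-item data structures */
  void (*init_item) (int item);  /* initialize data for one item */
} scan_hooks;

static int extend_calls;
static int init_sum;

static void demo_extend (void) { extend_calls++; }
static void demo_init (int item) { init_sum += item; }

/* Driver: one extend pass, then per-item initialization; every hook
   call is guarded, so clients may leave slots NULL.  */
static void
mini_scan (const scan_hooks *hooks, const int *items, size_t n)
{
  size_t i;

  if (hooks->extend)
    hooks->extend ();

  for (i = 0; i < n; i++)
    if (hooks->init_item)
      hooks->init_item (items[i]);
}

/* Run the driver once; encode both counters into one return value. */
int
run_scan_demo (void)
{
  static const int items[] = { 1, 2, 3 };
  scan_hooks hooks = { demo_extend, demo_init };

  extend_calls = 0;
  init_sum = 0;
  mini_scan (&hooks, items, 3);
  return extend_calls * 100 + init_sum;  /* one extend, sum 6 */
}
```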
+
+
+ /* Extend per basic block data structures. */
+ static void
+ sched_extend_bb (void)
+ {
+ rtx insn;
+ + /* The following is done to keep current_sched_info->next_tail non null. */
+ insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
+ if (NEXT_INSN (insn) == 0
+ || (!NOTE_P (insn)
+ && !LABEL_P (insn)
+ /* Don't emit a NOTE if it would end up before a BARRIER. */
+ && !BARRIER_P (NEXT_INSN (insn))))
+ {
+ rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
+ /* Make insn appear outside BB. */
+ set_block_for_insn (note, NULL);
+ BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
+ }
+ }
+ + /* Init per basic block data structures. */
+ void
+ sched_init_bbs (void)
+ {
+ sched_extend_bb ();
+ }
+
+ /* Extend data structures for logical insn UID. */
+ static void
+ luids_extend_insn (void)
+ {
+ int new_luids_max_uid = get_max_uid () + 1;
+
+ VEC_safe_grow_cleared (int, heap, sched_luids, new_luids_max_uid);
+ }
+
+ /* Initialize LUID for INSN. */
+ static void
+ luids_init_insn (rtx insn)
+ {
+ int i = INSN_P (insn) ? 1 : common_sched_info->luid_for_non_insn (insn);
+ int luid;
+
+ if (i >= 0)
+ {
+ luid = sched_max_luid;
+ sched_max_luid += i;
+ }
+ else
+ luid = -1;
+
+ SET_INSN_LUID (insn, luid);
+ }
+ + /* Initialize luids for BBS, BB, INSNS and INSN.
+ The hook common_sched_info->luid_for_non_insn () is used to determine
+ if notes, labels, etc. need luids. */
+ void
+ sched_init_luids (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
+ {
+ const struct sched_scan_info_def ssi =
+ {
+ NULL, /* extend_bb */
+ NULL, /* init_bb */
+ luids_extend_insn, /* extend_insn */
+ luids_init_insn /* init_insn */
+ };
+ + sched_scan (&ssi, bbs, bb, insns, insn);
+ }
+ + /* Free LUIDs. */
+ void
+ sched_finish_luids (void)
+ {
+ VEC_free (int, heap, sched_luids);
+ sched_max_luid = 1;
+ }
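The luid numbering above boils down to: a real insn consumes one logical uid, and anything else asks a hook how many it needs (0 means "share the current counter value", negative means "gets no luid", stored as -1). A toy model of that scheme, with made-up names standing in for INSN_P and the common_sched_info->luid_for_non_insn hook:

```c
#include <assert.h>

/* Hypothetical stand-ins for rtx kinds.  */
enum mini_kind { MINI_INSN, MINI_NOTE, MINI_LABEL };

static int mini_max_luid = 1;   /* like sched_max_luid */

/* Hook: how many luids a non-insn consumes.  Labels share the current
   counter value (0); notes get no luid at all (negative).  */
static int
luid_for_non_insn (enum mini_kind kind)
{
  return kind == MINI_LABEL ? 0 : -1;
}

/* Mirror of luids_init_insn: assign and return the luid, advancing
   the counter by the number of uids the object consumes.  */
static int
assign_luid (enum mini_kind kind)
{
  int i = (kind == MINI_INSN) ? 1 : luid_for_non_insn (kind);
  int luid;

  if (i >= 0)
    {
      luid = mini_max_luid;
      mini_max_luid += i;
    }
  else
    luid = -1;

  return luid;
}
```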
+ + /* Return logical uid of INSN. Helpful while debugging. */
+ int
+ insn_luid (rtx insn)
+ {
+ return INSN_LUID (insn);
+ }
+ + /* Extend per insn data in the target. */
+ void
+ sched_extend_target (void)
+ {
+ if (targetm.sched.h_i_d_extended)
+ targetm.sched.h_i_d_extended ();
+ }
+
+ /* Extend global scheduler structures (those, that live across calls to
+ schedule_block) to include information about just emitted INSN. */
+ static void
+ extend_h_i_d (void)
+ {
+ int reserve = (get_max_uid () + 1
+ - VEC_length (haifa_insn_data_def, h_i_d));
+ if (reserve > 0
+ && ! VEC_space (haifa_insn_data_def, h_i_d, reserve))
+ {
+ VEC_safe_grow_cleared (haifa_insn_data_def, heap, h_i_d,
+ 3 * get_max_uid () / 2);
+ sched_extend_target ();
+ }
+ }
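extend_h_i_d only reallocates when the needed uid is past the current capacity, and then grows the vector to 3/2 of the requirement, so that emitting insns one at a time costs amortized constant time. A standalone sketch of that growth policy (plain realloc instead of GCC's VEC API; names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A vector that grows to 3/2 of the needed size on overflow.  */
typedef struct grow_vec
{
  int *data;
  size_t len;   /* elements initialized so far */
  size_t cap;   /* elements allocated */
} grow_vec;

/* Make sure indices 0 .. MAX_UID are valid and zero-initialized,
   reallocating only when MAX_UID is out of capacity.  */
static void
ensure_capacity (grow_vec *v, size_t max_uid)
{
  size_t need = max_uid + 1;

  if (need > v->cap)
    {
      size_t new_cap = 3 * need / 2;  /* the 3/2 overshoot */

      v->data = realloc (v->data, new_cap * sizeof (int));
      memset (v->data + v->len, 0, (new_cap - v->len) * sizeof (int));
      v->cap = new_cap;
    }
  if (need > v->len)
    v->len = need;
}

/* Count how many reallocations 100 one-element extensions trigger.  */
int
run_grow_demo (void)
{
  grow_vec v = { NULL, 0, 0 };
  size_t last_cap = 0;
  int reallocs = 0;
  size_t uid;

  for (uid = 0; uid < 100; uid++)
    {
      ensure_capacity (&v, uid);
      if (v.cap != last_cap)
        {
          reallocs++;
          last_cap = v.cap;
        }
    }
  free (v.data);
  return reallocs;  /* far fewer than the 100 extensions */
}
```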
+ + /* Initialize h_i_d entry of the INSN with default values.
+ Values, that are not explicitly initialized here, hold zero. */
+ static void
+ init_h_i_d (rtx insn)
+ {
+ if (INSN_LUID (insn) > 0)
+ {
+ INSN_COST (insn) = -1;
+ find_insn_reg_weight (insn);
+ QUEUE_INDEX (insn) = QUEUE_NOWHERE;
+ INSN_TICK (insn) = INVALID_TICK;
+ INTER_TICK (insn) = INVALID_TICK;
+ TODO_SPEC (insn) = HARD_DEP;
+ }
+ }
+ + /* Initialize haifa_insn_data for BBS, BB, INSNS and INSN. */
+ void
+ haifa_init_h_i_d (bb_vec_t bbs, basic_block bb, insn_vec_t insns, rtx insn)
+ {
+ const struct sched_scan_info_def ssi =
+ {
+ NULL, /* extend_bb */
+ NULL, /* init_bb */
+ extend_h_i_d, /* extend_insn */
+ init_h_i_d /* init_insn */
+ };
+ + sched_scan (&ssi, bbs, bb, insns, insn);
+ }
+ + /* Finalize haifa_insn_data. */
+ void
+ haifa_finish_h_i_d (void)
+ {
+ VEC_free (haifa_insn_data_def, heap, h_i_d);
+ }
+ + /* Init data for the new insn INSN. */
+ static void
+ haifa_init_insn (rtx insn)
+ {
+ gcc_assert (insn != NULL);
+ + sched_init_luids (NULL, NULL, NULL, insn);
+ sched_extend_target ();
+ sched_deps_init (false);
+ haifa_init_h_i_d (NULL, NULL, NULL, insn);
+ + if (adding_bb_to_current_region_p)
+ {
+ sd_init_insn (insn);
+ + /* Extend dependency caches by one element. */
+ extend_dependency_caches (1, false);
+ }
+ }
+ + /* Init data for the new basic block BB which comes after AFTER. */
+ static void
+ haifa_init_only_bb (basic_block bb, basic_block after)
+ {
+ gcc_assert (bb != NULL);
+ + sched_init_bbs ();
+ + if (common_sched_info->add_block)
+ /* This changes only data structures of the front-end. */
+ common_sched_info->add_block (bb, after);
+ }
+
+ /* A generic version of sched_split_block (). */
+ basic_block
+ sched_split_block_1 (basic_block first_bb, rtx after)
+ {
+ edge e;
+
+ e = split_block (first_bb, after);
+ gcc_assert (e->src == first_bb);
+
+ /* sched_split_block emits note if *check == BB_END. Probably it
+ is better to rip that note off. */
+
+ return e->dest;
+ }
+ + /* A generic version of sched_create_empty_bb (). */
+ basic_block
+ sched_create_empty_bb_1 (basic_block after)
+ {
+ return create_empty_bb (after);
+ }
+ #endif /* INSN_SCHEDULING */
diff -cprNd -x .svn -x .hg trunk/gcc/modulo-sched.c sel-sched-branch/gcc/modulo-sched.c
*** trunk/gcc/modulo-sched.c Tue Apr 15 20:10:00 2008
--- sel-sched-branch/gcc/modulo-sched.c Tue May 13 11:10:05 2008
*************** static int compute_split_row (sbitmap, i
*** 187,199 ****
/* This page defines constants and structures for the modulo scheduling
driver. */
- /* As in haifa-sched.c: */
- /* issue_rate is the number of insns that can be scheduled in the same
- machine cycle. It can be defined in the config/mach/mach.h file,
- otherwise we set it to 1. */
- - static int issue_rate;
-

There is no log entry for the change.


static int sms_order_nodes (ddg_ptr, int, int *, int *);
static void set_node_sched_params (ddg_ptr);
static partial_schedule_ptr sms_schedule_by_order (ddg_ptr, int, int, int *);
--- 187,192 ----
*************** typedef struct node_sched_params
*** 242,248 ****
code in order to use sched_analyze() for computing the dependencies.
They are used when initializing the sched_info structure. */
static const char *
! sms_print_insn (rtx insn, int aligned ATTRIBUTE_UNUSED)
{
static char tmp[80];
--- 235,241 ----
code in order to use sched_analyze() for computing the dependencies.
They are used when initializing the sched_info structure. */
static const char *
! sms_print_insn (const_rtx insn, int aligned ATTRIBUTE_UNUSED)
{
static char tmp[80];
*************** compute_jump_reg_dependencies (rtx insn *** 258,264 ****
{
}
! static struct sched_info sms_sched_info =
{
NULL,
NULL,
--- 251,267 ----
{
}
! static struct common_sched_info_def sms_common_sched_info;
!
! static struct sched_deps_info_def sms_sched_deps_info =
! {
! compute_jump_reg_dependencies,
! NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
! NULL,
! 0, 0, 0
! };
!
! static struct haifa_sched_info sms_sched_info =
{
NULL,
NULL,

There are no comments describing these new variables.


*************** static struct sched_info sms_sched_info *** 267,282 ****
NULL,
sms_print_insn,
NULL,
- compute_jump_reg_dependencies,
NULL, NULL,
NULL, NULL,
! 0, 0, 0,
! NULL, NULL, NULL, NULL, NULL,
0
};
- /* Given HEAD and TAIL which are the first and last insns in a loop;
return the register which controls the loop. Return zero if it has
more than one occurrence in the loop besides the control part or the
--- 270,283 ----
NULL,
sms_print_insn,
NULL,
NULL, NULL,
NULL, NULL,
! 0, 0,
! NULL, NULL, NULL, 0
};
/* Given HEAD and TAIL which are the first and last insns in a loop;
return the register which controls the loop. Return zero if it has
more than one occurrence in the loop besides the control part or the
*************** canon_loop (struct loop *loop)
*** 856,861 ****
--- 857,875 ----
}
}
+ /* Setup infos. */
+ static void
+ setup_sched_infos (void)
+ {
+ memcpy (&sms_common_sched_info, &haifa_common_sched_info,
+ sizeof (sms_common_sched_info));
+ sms_common_sched_info.sched_pass_id = SCHED_SMS_PASS;
+ common_sched_info = &sms_common_sched_info;
+
+ sched_deps_info = &sms_sched_deps_info;
+ current_sched_info = &sms_sched_info;
+ }
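setup_sched_infos illustrates a pattern used throughout this patch: copy a shared default info table, override the one field that differs for this pass, and install the copy into the global pointer the scheduler consults. A self-contained sketch with hypothetical names (PASS_SMS standing in for SCHED_SMS_PASS, common_info for common_sched_info_def):

```c
#include <assert.h>
#include <string.h>

enum pass_id { PASS_NONE, PASS_SMS, PASS_EBB };

struct common_info
{
  enum pass_id pass;   /* which scheduling pass is running */
  int verbose;         /* some shared default setting */
};

/* Shared defaults, analogous to haifa_common_sched_info.  */
static const struct common_info default_common = { PASS_NONE, 1 };

static struct common_info sms_common;
static struct common_info *current_common;

/* Copy the defaults, override the pass id, install the table into
   the global pointer the rest of the code consults.  */
static void
setup_sms_infos (void)
{
  memcpy (&sms_common, &default_common, sizeof (sms_common));
  sms_common.pass = PASS_SMS;   /* only the pass id differs */
  current_common = &sms_common;
}
```

The design point is that pass-specific behavior lives in data, not in scattered `if (pass == ...)` checks.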
+ /* Probability in % that the sms-ed loop rolls enough so that optimized
version may be entered. Just a guess. */
#define PROB_SMS_ENOUGH_ITERATIONS 80
*************** sms_schedule (void)
*** 901,916 ****
issue_rate = 1;
/* Initialize the scheduler. */
! current_sched_info = &sms_sched_info;
! ! /* Init Data Flow analysis, to be used in interloop dep calculation. */
! df_set_flags (DF_LR_RUN_DCE);
! df_rd_add_problem ();
! df_note_add_problem ();
! df_chain_add_problem (DF_DU_CHAIN + DF_UD_CHAIN);
! df_analyze ();
! regstat_compute_calls_crossed ();
! sched_init ();
/* Allocate memory to hold the DDG array one entry for each loop.
We use loop->num as index into this array. */
--- 915,922 ----
issue_rate = 1;
/* Initialize the scheduler. */
! setup_sched_infos ();
! haifa_sched_init ();
/* Allocate memory to hold the DDG array one entry for each loop.
We use loop->num as index into this array. */
*************** sms_schedule (void)
*** 1242,1252 ****
free_ddg (g);
}
- regstat_free_calls_crossed ();
free (g_arr);
/* Release scheduler data, needed until now because of DFA. */
! sched_finish ();
loop_optimizer_finalize ();
}
--- 1248,1257 ----
free_ddg (g);
}
free (g_arr);
/* Release scheduler data, needed until now because of DFA. */
! haifa_sched_finish ();
loop_optimizer_finalize ();
}
diff -cprNd -x .svn -x .hg trunk/gcc/opts.c sel-sched-branch/gcc/opts.c
*** trunk/gcc/opts.c Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/opts.c Thu May 29 18:28:30 2008
*************** along with GCC; see the file COPYING3. *** 46,51 ****
--- 46,54 ----
unsigned HOST_WIDE_INT g_switch_value;
bool g_switch_set;
+ /* Same for selective scheduling. */
+ bool sel_sched_switch_set;
+

There is no log entry for this addition, nor for the corresponding one in flags.h.


/* True if we should exit after parsing options. */
bool exit_after_options;
*************** decode_options (unsigned int argc, const
*** 1040,1045 ****
--- 1043,1053 ----
flag_reorder_blocks_and_partition = 0;
flag_reorder_blocks = 1;
}
+ + /* Pipelining of outer loops is only possible when general pipelining
+ capabilities are requested. */
+ if (!flag_sel_sched_pipelining)
+ flag_sel_sched_pipelining_outer_loops = 0;
}
#define LEFT_COLUMN 27
*************** common_handle_option (size_t scode, cons
*** 1791,1796 ****
--- 1799,1817 ----
set_random_seed (arg);
break;
+ case OPT_fselective_scheduling:
+ case OPT_fselective_scheduling2:
+ sel_sched_switch_set = true;
+ break;
+ case OPT_fsel_insn_range:
+ if (value)
+ return 0;
+ break;
+ + case OPT_fsel_insn_range_:
+ sel_sched_fix_param ("insn-range", arg);
+ break;
+

I think you should remove fsel-insn-range. It was used only for debugging during development; I don't think it makes sense for a GCC user.

case OPT_fsched_verbose_:
#ifdef INSN_SCHEDULING
fix_sched_param ("verbose", arg);
diff -cprNd -x .svn -x .hg trunk/gcc/params.def sel-sched-branch/gcc/params.def
*** trunk/gcc/params.def Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/params.def Wed Apr 16 00:20:13 2008
*************** DEFPARAM(PARAM_MAX_SCHED_REGION_INSNS,
*** 565,570 ****
--- 565,580 ----
"The maximum number of insns in a region to be considered for interblock scheduling",
100, 0, 0)
+ DEFPARAM(PARAM_MAX_PIPELINE_REGION_BLOCKS,
+ "max-pipeline-region-blocks",
+ "The maximum number of blocks in a region to be considered for interblock scheduling",
+ 15, 0, 0)

It is not described in the documentation.


+ + DEFPARAM(PARAM_MAX_PIPELINE_REGION_INSNS,
+ "max-pipeline-region-insns",
+ "The maximum number of insns in a region to be considered for interblock scheduling",
+ 200, 0, 0)
+

Ditto.


DEFPARAM(PARAM_MIN_SPEC_PROB,
"min-spec-prob",
"The minimum probability of reaching a source block for interblock speculative scheduling",
*************** DEFPARAM(PARAM_SCHED_SPEC_PROB_CUTOFF,
*** 585,590 ****
--- 595,688 ----
"The minimal probability of speculation success (in percents), so that speculative insn will be scheduled.",
40, 0, 100)
+ DEFPARAM(PARAM_SELSCHED_DUMP_CFG_FLAGS,
+ "selsched-dump-cfg-flags",
+ "Override sel_dump_cfg_flags",
+ 0, 0, 0)
+

I think it should be removed. It was used only for debugging.


+ DEFPARAM(PARAM_SELSCHED_MAX_LOOKAHEAD,
+ "selsched-max-lookahead",
+ "The maximum size of the lookahead window of selective scheduling",
+ 50, 0, 0)
+ + DEFPARAM(PARAM_SELSCHED_MAX_SCHED_TIMES,
+ "selsched-max-sched-times",
+ "Maximum number of times that an insn could be scheduled",
+ 2, 0, 0)
+

It is not described in the documentation.


+ DEFPARAM(PARAM_SELSCHED_INSNS_TO_RENAME,
+ "selsched-insns-to-rename",
+ "Maximum number of instructions in the ready list that are considered eligible for renaming",
+ 2, 0, 0)
+

It is not described in the documentation.


+ /* Minimal distance (in CPU cycles) between store and load targeting same
+ memory locations. */
+ + DEFPARAM (PARAM_SCHED_MEM_TRUE_DEP_COST,
+ "sched-mem-true-dep-cost",
+ "Minimal distance between possibly conflicting store and load",
+ 1, 0, 0)
+

Ditto. The log entry for this is also absent.


The rest of the parameters should definitely be removed.  They were used
only for debugging.

+ DEFPARAM(PARAM_SEL1_START,
+ "sel1-start",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_SEL1_STOP,
+ "sel1-stop",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_SEL1_P,
+ "sel1-p",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_SEL2_START,
+ "sel2-start",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_SEL2_STOP,
+ "sel2-stop",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_SEL2_P,
+ "sel2-p",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_REGION_START,
+ "region-start",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_REGION_STOP,
+ "region-stop",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_REGION_P,
+ "region-p",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_INSN_START,
+ "insn-start",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_INSN_STOP,
+ "insn-stop",
+ "Allow something",
+ 0, 0, 0)
+ + DEFPARAM(PARAM_INSN_P,
+ "insn-p",
+ "Allow something",
+ 0, 0, 0)
+ DEFPARAM(PARAM_MAX_LAST_VALUE_RTL,
"max-last-value-rtl",
"The maximum number of RTL nodes that can be recorded as combiner's last value",




diff -cprNd -x .svn -x .hg trunk/gcc/sched-deps.c sel-sched-branch/gcc/sched-deps.c
*** trunk/gcc/sched-deps.c	Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/sched-deps.c	Thu May 29 18:28:30 2008
...

*************** flush_pending_lists (struct deps *deps, *** 1428,1450 ****

...


--- 1496,1695 ----

...



+
+ /* Internal variable for sched_analyze_[12] () functions.
+ If it is nonzero, this means that sched_analyze_[12] looks
+ at the most toplevel SET. */
+ static bool can_start_lhs_rhs_p;
+
+ /* Extend reg info for the deps context DEPS given that
+ we have just generated a register numbered REGNO. */
+ static void
+ extend_deps_reg_info (struct deps *deps, int regno)

There is no changelog entry for the new function.


+ {
+ int max_regno = regno + 1;
+
+ gcc_assert (!reload_completed);
+
+ /* In a readonly context, it would not hurt to extend info,
+ but it should not be needed. */
+ if (reload_completed && deps->readonly)
+ {
+ deps->max_reg = max_regno;
+ return;
+ }
+
+ if (max_regno > deps->max_reg)
+ {
+ deps->reg_last = XRESIZEVEC (struct deps_reg, deps->reg_last,
+ max_regno);
+ memset (&deps->reg_last[deps->max_reg],
+ 0, (max_regno - deps->max_reg)
+ * sizeof (struct deps_reg));
+ deps->max_reg = max_regno;
+ }
+ }
+
+ /* Extends REG_INFO_P if needed. */
+ void
+ maybe_extend_reg_info_p (void)
+ {
+ /* Extend REG_INFO_P, if needed. */
+ if ((unsigned int)max_regno - 1 >= reg_info_p_size)
+ {
+ size_t new_reg_info_p_size = max_regno + 128;
+
+ gcc_assert (!reload_completed && sel_sched_p ());
+
+ reg_info_p = xrecalloc (reg_info_p, new_reg_info_p_size,
+ reg_info_p_size, sizeof (*reg_info_p));
+ reg_info_p_size = new_reg_info_p_size;
+ }
+ }
+ /* Analyze a single reference to register (reg:MODE REGNO) in INSN.
The type of the reference is specified by REF and can be SET,
CLOBBER, PRE_DEC, POST_DEC, PRE_INC, POST_INC or USE. */
...

*************** init_deps (struct deps *deps)
*** 2452,2457 ****
--- 2890,2897 ----
deps->sched_before_next_call = 0;
deps->in_post_call_group_p = not_post_call;
deps->libcall_block_tail_insn = 0;
+ deps->last_reg_pending_barrier = NOT_A_BARRIER;
+ deps->readonly = 0;
}

There is no changelog entry for the change.



/* Free insn lists found in DEPS. */
*************** free_deps (struct deps *deps)
*** 2485,2526 ****
CLEAR_REG_SET (&deps->reg_conditional_sets);
free (deps->reg_last);
}
! /* If it is profitable to use them, initialize caches for tracking
! dependency information. LUID is the number of insns to be scheduled,
! it is used in the estimate of profitability. */
void
! init_dependency_caches (int luid)
{
/* Average number of insns in the basic block.
'+ 1' is used to make it nonzero. */
! int insns_in_block = luid / n_basic_blocks + 1;
! /* ?!? We could save some memory by computing a per-region luid mapping
! which could reduce both the number of vectors in the cache and the size
! of each vector. Instead we just avoid the cache entirely unless the
! average number of instructions in a basic block is very high. See
! the comment before the declaration of true_dependency_cache for
! what we consider "very high". */
! if (insns_in_block > 100 * 5)
{
cache_size = 0;
! extend_dependency_caches (luid, true);
}
! dl_pool = create_alloc_pool ("deps_list", sizeof (struct _deps_list),
! /* Allocate lists for one block at a time. */
! insns_in_block);
!
! dn_pool = create_alloc_pool ("dep_node", sizeof (struct _dep_node),
! /* Allocate nodes for one block at a time.
! We assume that average insn has
! 5 producers. */
! 5 * insns_in_block);
}
/* Create or extend (depending on CREATE_P) dependency caches to
size N. */
void
--- 2925,3022 ----
CLEAR_REG_SET (&deps->reg_conditional_sets);
free (deps->reg_last);
+ deps->reg_last = NULL;
+
+ deps = NULL;
}

There is no changelog entry for the change. ...

*************** finish_deps_global (void)
*** 2615,2621 ****
}
/* Estimate the weakness of dependence between MEM1 and MEM2. */
! static dw_t
estimate_dep_weak (rtx mem1, rtx mem2)
{
rtx r1, r2;
--- 3127,3133 ----
}
/* Estimate the weakness of dependence between MEM1 and MEM2. */
! dw_t
estimate_dep_weak (rtx mem1, rtx mem2)
{
rtx r1, r2;

There is no changelog entry for making the function global.


...


diff -cprNd -x .svn -x .hg trunk/gcc/sched-ebb.c sel-sched-branch/gcc/sched-ebb.c
*** trunk/gcc/sched-ebb.c	Mon Oct 29 18:10:25 2007
--- sel-sched-branch/gcc/sched-ebb.c	Tue May 13 11:10:05 2008
...

*************** begin_schedule_ready (rtx insn, rtx last
*** 184,190 ****
current_sched_info->next_tail = NEXT_INSN (BB_END (bb));
gcc_assert (current_sched_info->next_tail);
! add_block (bb, last_bb);
gcc_assert (last_bb == bb);
}
}
--- 184,191 ----
current_sched_info->next_tail = NEXT_INSN (BB_END (bb));
gcc_assert (current_sched_info->next_tail);
! /* Append new basic block to the end of the ebb. */
! sched_init_only_bb (bb, last_bb);
gcc_assert (last_bb == bb);
}
}

There is no changelog entry for the change. ...


*************** schedule_ebbs (void)
*** 622,644 ****
if (reload_completed)
reposition_prologue_and_epilogue_notes ();
! sched_finish ();
! regstat_free_calls_crossed ();
}
/* INSN has been added to/removed from current ebb. */
static void
! add_remove_insn (rtx insn ATTRIBUTE_UNUSED, int remove_p)
{
if (!remove_p)
! n_insns++;
else
! n_insns--;
}
/* BB was added to ebb after AFTER. */
static void
! add_block1 (basic_block bb, basic_block after)
{
/* Recovery blocks are always bounded by BARRIERS, therefore, they always form single block EBB,
--- 625,646 ----
if (reload_completed)
reposition_prologue_and_epilogue_notes ();
! haifa_sched_finish ();
}
/* INSN has been added to/removed from current ebb. */
static void
! ebb_add_remove_insn (rtx insn ATTRIBUTE_UNUSED, int remove_p)
{
if (!remove_p)
! rgn_n_insns++;
else
! rgn_n_insns--;
}
/* BB was added to ebb after AFTER. */
static void
! ebb_add_block (basic_block bb, basic_block after)
{
/* Recovery blocks are always bounded by BARRIERS, therefore, they always form single block EBB,

You missed the changelog entry for renaming add_block1 to ebb_add_block.


...


diff -cprNd -x .svn -x .hg trunk/gcc/sched-int.h sel-sched-branch/gcc/sched-int.h
*** trunk/gcc/sched-int.h	Mon Oct 29 18:10:25 2007
--- sel-sched-branch/gcc/sched-int.h	Thu May 29 18:28:30 2008

...


*************** struct spec_info_def
*** 444,466 ****
/* Minimal cumulative weakness of speculative instruction's
dependencies, so that insn will be scheduled. */
! dw_t weakness_cutoff;
/* Flags from the enum SPEC_SCHED_FLAGS. */
int flags;
};
typedef struct spec_info_def *spec_info_t;
! extern struct sched_info *current_sched_info;
/* Indexed by INSN_UID, the collection of all data associated with
a single instruction. */
! struct haifa_insn_data
{
! /* We can't place 'struct _deps_list' into h_i_d instead of deps_list_t
! because when h_i_d extends, addresses of the deps_list->first
! change without updating deps_list->first->next->prev_nextp. */
/* A list of hard backward dependencies. The insn is a consumer of all the
deps mentioned here. */
--- 616,655 ----
/* Minimal cumulative weakness of speculative instruction's
dependencies, so that insn will be scheduled. */
! dw_t data_weakness_cutoff;
!
! /* Minimal usefulness of speculative instruction to be considered for
! scheduling. */
! int control_weakness_cutoff;

There is no changelog entry for the new member. ...

*************** struct haifa_insn_data
*** 540,572 ****
rtx orig_pat;
};
! extern struct haifa_insn_data *h_i_d;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
! #define INSN_HARD_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].hard_back_deps)
! #define INSN_SPEC_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].spec_back_deps)
! #define INSN_FORW_DEPS(INSN) (h_i_d[INSN_UID (INSN)].forw_deps)
! #define INSN_RESOLVED_BACK_DEPS(INSN) \
! (h_i_d[INSN_UID (INSN)].resolved_back_deps)
! #define INSN_RESOLVED_FORW_DEPS(INSN) \
! (h_i_d[INSN_UID (INSN)].resolved_forw_deps)
! #define INSN_LUID(INSN) (h_i_d[INSN_UID (INSN)].luid)
! #define CANT_MOVE(insn) (h_i_d[INSN_UID (insn)].cant_move)
! #define INSN_PRIORITY(INSN) (h_i_d[INSN_UID (INSN)].priority)
! #define INSN_PRIORITY_STATUS(INSN) (h_i_d[INSN_UID (INSN)].priority_status)
#define INSN_PRIORITY_KNOWN(INSN) (INSN_PRIORITY_STATUS (INSN) > 0)
! #define INSN_REG_WEIGHT(INSN) (h_i_d[INSN_UID (INSN)].reg_weight)
! #define HAS_INTERNAL_DEP(INSN) (h_i_d[INSN_UID (INSN)].has_internal_dep)
! #define TODO_SPEC(INSN) (h_i_d[INSN_UID (INSN)].todo_spec)
! #define DONE_SPEC(INSN) (h_i_d[INSN_UID (INSN)].done_spec)
! #define CHECK_SPEC(INSN) (h_i_d[INSN_UID (INSN)].check_spec)
! #define RECOVERY_BLOCK(INSN) (h_i_d[INSN_UID (INSN)].recovery_block)
! #define ORIG_PAT(INSN) (h_i_d[INSN_UID (INSN)].orig_pat)
/* INSN is either a simple or a branchy speculation check. */
! #define IS_SPECULATION_CHECK_P(INSN) (RECOVERY_BLOCK (INSN) != NULL)
/* INSN is a speculation check that will simply reexecute the speculatively
scheduled instruction if the speculation fails. */
--- 734,789 ----
...

! ! extern VEC(haifa_deps_insn_data_def, heap) *h_d_i_d;
! ! #define HDID(INSN) (VEC_index (haifa_deps_insn_data_def, h_d_i_d, \
! INSN_LUID (INSN)))

There are no changelog entries for h_d_i_d and HDID.


! #define INSN_DEP_COUNT(INSN) (HDID (INSN)->dep_count)
! #define HAS_INTERNAL_DEP(INSN) (HDID (INSN)->has_internal_dep)
! #define INSN_FORW_DEPS(INSN) (HDID (INSN)->forw_deps)
! #define INSN_RESOLVED_BACK_DEPS(INSN) (HDID (INSN)->resolved_back_deps)
! #define INSN_RESOLVED_FORW_DEPS(INSN) (HDID (INSN)->resolved_forw_deps)
! #define INSN_HARD_BACK_DEPS(INSN) (HDID (INSN)->hard_back_deps)
! #define INSN_SPEC_BACK_DEPS(INSN) (HDID (INSN)->spec_back_deps)
! #define CANT_MOVE(INSN) (HDID (INSN)->cant_move)
! #define CANT_MOVE_BY_LUID(LUID) (VEC_index (haifa_deps_insn_data_def, h_d_i_d, \
! LUID)->cant_move)
!
!
! #define INSN_PRIORITY(INSN) (HID (INSN)->priority)
! #define INSN_PRIORITY_STATUS(INSN) (HID (INSN)->priority_status)
#define INSN_PRIORITY_KNOWN(INSN) (INSN_PRIORITY_STATUS (INSN) > 0)
! #define TODO_SPEC(INSN) (HID (INSN)->todo_spec)
! #define DONE_SPEC(INSN) (HID (INSN)->done_spec)
! #define CHECK_SPEC(INSN) (HID (INSN)->check_spec)
! #define RECOVERY_BLOCK(INSN) (HID (INSN)->recovery_block)
! #define ORIG_PAT(INSN) (HID (INSN)->orig_pat)
/* INSN is either a simple or a branchy speculation check. */
! #define IS_SPECULATION_CHECK_P(INSN) \
! (sel_sched_p () ? sel_insn_is_speculation_check (INSN) : RECOVERY_BLOCK (INSN) != NULL)

There is no changelog entry for the new macro.



/* INSN is a speculation check that will simply reexecute the speculatively
scheduled instruction if the speculation fails. */
*************** enum SCHED_FLAGS {
*** 712,725 ****
DO_SPECULATION = USE_DEPS_LIST << 1,
SCHED_RGN = DO_SPECULATION << 1,
SCHED_EBB = SCHED_RGN << 1,
! /* Scheduler can possible create new basic blocks. Used for assertions. */
! NEW_BBS = SCHED_EBB << 1
};
enum SPEC_SCHED_FLAGS {
COUNT_SPEC_IN_CRITICAL_PATH = 1,
PREFER_NON_DATA_SPEC = COUNT_SPEC_IN_CRITICAL_PATH << 1,
! PREFER_NON_CONTROL_SPEC = PREFER_NON_DATA_SPEC << 1
};
#define NOTE_NOT_BB_P(NOTE) (NOTE_P (NOTE) && (NOTE_KIND (NOTE) \
--- 929,944 ----
DO_SPECULATION = USE_DEPS_LIST << 1,
SCHED_RGN = DO_SPECULATION << 1,
SCHED_EBB = SCHED_RGN << 1,
! /* Scheduler can possibly create new basic blocks. Used for assertions. */
! NEW_BBS = SCHED_EBB << 1,
! SEL_SCHED = NEW_BBS << 1
};

There is no changelog entry for SEL_SCHED.


enum SPEC_SCHED_FLAGS {
COUNT_SPEC_IN_CRITICAL_PATH = 1,
PREFER_NON_DATA_SPEC = COUNT_SPEC_IN_CRITICAL_PATH << 1,
! PREFER_NON_CONTROL_SPEC = PREFER_NON_DATA_SPEC << 1,
! SEL_SCHED_SPEC_DONT_CHECK_CONTROL = PREFER_NON_CONTROL_SPEC << 1

There is no changelog entry for SEL_SCHED_SPEC_DONT_CHECK_CONTROL.


};
#define NOTE_NOT_BB_P(NOTE) (NOTE_P (NOTE) && (NOTE_KIND (NOTE) \
*************** enum INSN_TRAP_CLASS
*** 809,830 ****
#define HAIFA_INLINE __inline
#endif
/* Functions in sched-deps.c. */
extern bool sched_insns_conditions_mutex_p (const_rtx, const_rtx);
extern void add_dependence (rtx, rtx, enum reg_note);
extern void sched_analyze (struct deps *, rtx, rtx);
- extern bool deps_pools_are_empty_p (void);
- extern void sched_free_deps (rtx, rtx, bool);
extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
! extern void init_dependency_caches (int);
! extern void free_dependency_caches (void);
! extern void extend_dependency_caches (int, bool);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
extern void debug_ds (ds_t);
/* Functions in haifa-sched.c. */
--- 1028,1137 ----
#define HAIFA_INLINE __inline
#endif
+ struct sched_deps_info_def
+ {
+ /* Called when computing dependencies for a JUMP_INSN. This function
+ should store the set of registers that must be considered as set by
+ the jump in the regset. */
+ void (*compute_jump_reg_dependencies) (rtx, regset, regset, regset);
+
+ /* Start analyzing insn. */
+ void (*start_insn) (rtx);
+
+ /* Finish analyzing insn. */
+ void (*finish_insn) (void);
+
+ /* Start analyzing insn LHS (Left Hand Side). */
+ void (*start_lhs) (rtx);
+
+ /* Finish analyzing insn LHS. */
+ void (*finish_lhs) (void);
+
+ /* Start analyzing insn RHS (Right Hand Side). */
+ void (*start_rhs) (rtx);
+
+ /* Finish analyzing insn RHS. */
+ void (*finish_rhs) (void);
+
+ /* Note set of the register. */
+ void (*note_reg_set) (int);
+
+ /* Note clobber of the register. */
+ void (*note_reg_clobber) (int);
+
+ /* Note use of the register. */
+ void (*note_reg_use) (int);
+
+ /* Note memory dependence of type DS between MEM1 and MEM2 (which is
+ in the INSN2). */
+ void (*note_mem_dep) (rtx mem1, rtx mem2, rtx insn2, ds_t ds);
+
+ /* Note a dependence of type DS from the INSN. */
+ void (*note_dep) (rtx insn, ds_t ds);
+
+ /* Nonzero if we should use cselib for better alias analysis. This
+ must be 0 if the dependency information is used after sched_analyze
+ has completed, e.g. if we're using it to initialize state for successor
+ blocks in region scheduling. */
+ unsigned int use_cselib : 1;
+
+ /* If set, generate links between instruction as DEPS_LIST.
+ Otherwise, generate usual INSN_LIST links. */
+ unsigned int use_deps_list : 1;
+
+ /* Generate data and control speculative dependencies.
+ Requires USE_DEPS_LIST set. */
+ unsigned int generate_spec_deps : 1;
+ };
+
+ extern struct sched_deps_info_def *sched_deps_info;

No log entry for sched_deps_info.


+ + /* Functions in sched-deps.c. */
extern bool sched_insns_conditions_mutex_p (const_rtx, const_rtx);
extern void add_dependence (rtx, rtx, enum reg_note);
extern void sched_analyze (struct deps *, rtx, rtx);
extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
! extern void deps_analyze_insn (struct deps *, rtx);
! extern void remove_from_deps (struct deps *, rtx);
! extern void add_forw_dep (dep_link_t);
! extern void compute_forward_dependences (rtx, rtx);
! extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
! enum reg_note, ds_t);
! extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
! extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
! extern void delete_back_forw_dep (dep_link_t);
! extern dw_t get_dep_weak_1 (ds_t, ds_t);

No ChangeLog entries for deps_analyze_insn ... get_dep_weak_1.


extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
+ extern dw_t estimate_dep_weak (rtx, rtx);
extern ds_t ds_merge (ds_t, ds_t);
+ extern ds_t ds_full_merge (ds_t, ds_t, rtx, rtx);
+ extern ds_t ds_max_merge (ds_t, ds_t);
+ extern dw_t ds_weak (ds_t);
+ extern ds_t ds_get_speculation_types (ds_t);
+ extern ds_t ds_get_max_dep_weak (ds_t);
+

No ChangeLog entries for the above functions marked with +.


...

+ extern void haifa_note_reg_set (int);
+ extern void haifa_note_reg_clobber (int);
+ extern void haifa_note_reg_use (int);
+ 
+ extern void maybe_extend_reg_info_p (void);
+ 
+ extern void deps_start_bb (struct deps *, rtx);
+ extern enum reg_note ds_to_dt (ds_t);
+

No ChangeLog entries for the above functions marked with +.



+ extern bool deps_pools_are_empty_p (void);
+ extern void sched_free_deps (rtx, rtx, bool);
+ extern void extend_dependency_caches (int, bool);
+ extern void debug_ds (ds_t);
/* Functions in haifa-sched.c. */
*************** extern int haifa_classify_insn (const_rt
*** 832,857 ****
extern void get_ebb_head_tail (basic_block, basic_block, rtx *, rtx *);
extern int no_real_insns_p (const_rtx, const_rtx);
- extern void rm_other_notes (rtx, rtx);
-

No log entries for rm_other_notes.


extern int insn_cost (rtx);
extern int dep_cost (dep_t);
extern int set_priorities (rtx, rtx);
! extern void schedule_block (basic_block *, int);
! extern void sched_init (void);
! extern void sched_finish (void);
extern int try_ready (rtx);
! extern void * xrecalloc (void *, size_t, size_t, size_t);
extern bool sched_insn_is_legitimate_for_speculation_p (const_rtx, ds_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern rtx bb_note (basic_block);
! /* Functions in sched-rgn.c. */
extern void debug_dependencies (rtx, rtx);
/* sched-deps.c interface to walk, add, search, update, resolve, delete
and debug instruction dependencies. */
--- 1139,1245 ----
extern void get_ebb_head_tail (basic_block, basic_block, rtx *, rtx *);
extern int no_real_insns_p (const_rtx, const_rtx);
extern int insn_cost (rtx);
+ extern int dep_cost_1 (dep_t, dw_t);
extern int dep_cost (dep_t);
extern int set_priorities (rtx, rtx);
! extern void schedule_block (basic_block *);
! 
! extern int cycle_issued_insns;
! extern int issue_rate;
! extern int dfa_lookahead;
! 
! extern void ready_sort (struct ready_list *);
! extern rtx ready_element (struct ready_list *, int);
! extern rtx *ready_lastpos (struct ready_list *);
extern int try_ready (rtx);
! extern void sched_extend_ready_list (int);
! extern void sched_finish_ready_list (void);
! extern void sched_change_pattern (rtx, rtx);
! extern int sched_speculate_insn (rtx, ds_t, rtx *);
extern bool sched_insn_is_legitimate_for_speculation_p (const_rtx, ds_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern rtx bb_note (basic_block);
+ extern void concat_note_lists (rtx, rtx *);

No ChangeLog entries for the above functions marked with + and !.


...
+ extern void sched_rgn_local_finish (void);

Missing ChangeLog entry.


...

diff -cprNd -x .svn -x .hg trunk/gcc/sched-rgn.c sel-sched-branch/gcc/sched-rgn.c
*** trunk/gcc/sched-rgn.c Tue May 20 20:04:58 2008
--- sel-sched-branch/gcc/sched-rgn.c Thu May 29 18:28:30 2008
*************** along with GCC; see the file COPYING3.
*** 64,153 ****
...
--- 64,132 ----
#include "cfglayout.h"
#include "params.h"
#include "sched-int.h"
+ #include "sel-sched.h"
+ #include "cselib.h"

This is missing from the ChangeLog and Makefile.in.  Please fix it.


Also, I don't understand why you need cselib.h here.  Please
explain the reason for this.

...

*************** debug_regions (void)
*** 423,450 ****
}
}
/* Build a single block region for each basic block in the function.
This allows for using the same code for interblock and basic block
scheduling. */
static void
! find_single_block_region (void)
{
! basic_block bb;
nr_regions = 0;
! FOR_EACH_BB (bb)
! {
! rgn_bb_table[nr_regions] = bb->index;
! RGN_NR_BLOCKS (nr_regions) = 1;
! RGN_BLOCKS (nr_regions) = nr_regions;
! RGN_DONT_CALC_DEPS (nr_regions) = 0;
! RGN_HAS_REAL_EBB (nr_regions) = 0;
! CONTAINING_RGN (bb->index) = nr_regions;
! BLOCK_TO_BB (bb->index) = 0;
! nr_regions++;
! }
}
/* Update number of blocks and the estimate for number of insns
--- 378,537 ----
...


/* Build a single block region for each basic block in the function.
This allows for using the same code for interblock and basic block
scheduling. */
static void
! find_single_block_region (bool ebbs_p)
{

The new parameter is missing from the ChangeLog.




!   basic_block bb, ebb_start;
!   int i = 0;
  
    nr_regions = 0;
  
!   if (ebbs_p) {
!     int probability_cutoff;
!     if (profile_info && flag_branch_probabilities)
!       probability_cutoff = PARAM_VALUE (TRACER_MIN_BRANCH_PROBABILITY_FEEDBACK);
!     else
!       probability_cutoff = PARAM_VALUE (TRACER_MIN_BRANCH_PROBABILITY);
!     probability_cutoff = REG_BR_PROB_BASE / 100 * probability_cutoff;
! 
!     FOR_EACH_BB (ebb_start)
!       {
!         RGN_NR_BLOCKS (nr_regions) = 0;
!         RGN_BLOCKS (nr_regions) = i;
!         RGN_DONT_CALC_DEPS (nr_regions) = 0;
!         RGN_HAS_REAL_EBB (nr_regions) = 0;
! 
!         for (bb = ebb_start; ; bb = bb->next_bb)
!           {
!             edge e;
!             edge_iterator ei;
! 
!             rgn_bb_table[i] = bb->index;
!             RGN_NR_BLOCKS (nr_regions)++;
!             CONTAINING_RGN (bb->index) = nr_regions;
!             BLOCK_TO_BB (bb->index) = i - RGN_BLOCKS (nr_regions);
!             i++;
! 
!             if (bb->next_bb == EXIT_BLOCK_PTR
!                 || LABEL_P (BB_HEAD (bb->next_bb)))
!               break;
! 
!             FOR_EACH_EDGE (e, ei, bb->succs)
!               if ((e->flags & EDGE_FALLTHRU) != 0)
!                 break;
!             if (! e)
!               break;
!             if (e->probability <= probability_cutoff)
!               break;
!           }
! 
!         ebb_start = bb;
!         nr_regions++;
!       }
!   }
!   else
!     FOR_EACH_BB (bb)
!       {
!         rgn_bb_table[nr_regions] = bb->index;
!         RGN_NR_BLOCKS (nr_regions) = 1;
!         RGN_BLOCKS (nr_regions) = nr_regions;
!         RGN_DONT_CALC_DEPS (nr_regions) = 0;
!         RGN_HAS_REAL_EBB (nr_regions) = 0;
! 
!         CONTAINING_RGN (bb->index) = nr_regions;
!         BLOCK_TO_BB (bb->index) = 0;
!         nr_regions++;
!       }
! }
! 
! /* Estimate number of the insns in the BB.  */
! static int
! rgn_estimate_number_of_insns (basic_block bb)


This new function is missing from the ChangeLog.


! {
! return INSN_LUID (BB_END (bb)) - INSN_LUID (BB_HEAD (bb));
}
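The EBB branch of find_single_block_region above extends a region along a fall-through edge only while the edge's probability exceeds a cutoff derived from the tracer parameters.  A stand-alone sketch of just that cutoff test (the constants are stand-ins for GCC's REG_BR_PROB_BASE and the tracer parameter values):

```c
#include <assert.h>

/* Stand-ins for GCC's REG_BR_PROB_BASE and the tracer parameter
   TRACER_MIN_BRANCH_PROBABILITY (a percentage).  */
#define BR_PROB_BASE 10000
#define MIN_BRANCH_PROBABILITY 50

/* Return nonzero if an EBB may be extended across a fall-through edge
   whose probability is EDGE_PROB (scaled by BR_PROB_BASE), using the
   same cutoff computation as the patch: the percentage parameter is
   rescaled into REG_BR_PROB_BASE units, and the loop breaks when
   e->probability <= probability_cutoff.  */
static int
ebb_may_extend_p (int edge_prob)
{
  int probability_cutoff = BR_PROB_BASE / 100 * MIN_BRANCH_PROBABILITY;
  return edge_prob > probability_cutoff;
}
```

With a 50% parameter the cutoff is 5000 of 10000, so only edges strictly more likely than that keep the EBB growing.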
/* Update number of blocks and the estimate for number of insns
*************** new_ready (rtx next, ds_t ts)
*** 2106,2129 ****
      if (not_ex_free
          /* We are here because is_exception_free () == false.
             But we possibly can handle that with control speculation.  */
!         && (current_sched_info->flags & DO_SPECULATION)
!         && (spec_info->mask & BEGIN_CONTROL))
!       {
!         ds_t new_ds;
! 
!         /* Add control speculation to NEXT's dependency type.  */
!         new_ds = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
! 
!         /* Check if NEXT can be speculated with new dependency type.  */
!         if (sched_insn_is_legitimate_for_speculation_p (next, new_ds))
!           /* Here we got new control-speculative instruction.  */
!           ts = new_ds;
!         else
!           /* NEXT isn't ready yet.  */
!           ts = (ts & ~SPECULATIVE) | HARD_DEP;
!       }
      else
-       /* NEXT isn't ready yet.  */
        ts = (ts & ~SPECULATIVE) | HARD_DEP;
    }
  }
--- 2209,2219 ----
      if (not_ex_free
          /* We are here because is_exception_free () == false.
             But we possibly can handle that with control speculation.  */
!         && sched_deps_info->generate_spec_deps
!         && spec_info->mask & BEGIN_CONTROL)
!       /* Here we got new control-speculative instruction.  */
!       ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
      else
        ts = (ts & ~SPECULATIVE) | HARD_DEP;
    }
  }


These changes are missing from the ChangeLog.


*************** compute_jump_reg_dependencies (rtx insn
*** 2210,2219 ****
add_branch_dependences. */
}
/* Used in schedule_insns to initialize current_sched_info for scheduling
regions (or single basic blocks). */
! static struct sched_info region_sched_info =
{
init_ready_list,
can_schedule_ready_p,
--- 2300,2327 ----
add_branch_dependences. */
}
+ static struct common_sched_info_def rgn_common_sched_info;
+ 
+ static const struct sched_deps_info_def rgn_const_sched_deps_info =
+   {
+     compute_jump_reg_dependencies,
+     NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+     0, 0, 0
+   };
+ 
+ static const struct sched_deps_info_def rgn_const_sel_sched_deps_info =
+   {
+     compute_jump_reg_dependencies,
+     NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+     0, 0, 0
+   };
+ 
+ static struct sched_deps_info_def rgn_sched_deps_info;
+ 

Comments for these variables, please.


/* Used in schedule_insns to initialize current_sched_info for scheduling
regions (or single basic blocks). */
! static const struct haifa_sched_info rgn_const_sched_info =
{
init_ready_list,
can_schedule_ready_p,
*************** static struct sched_info region_sched_in
*** 2222,2241 ****
rgn_rank,
rgn_print_insn,
contributes_to_priority,
- compute_jump_reg_dependencies,
NULL, NULL,
NULL, NULL,
! 0, 0, 0,
! add_remove_insn,
begin_schedule_ready,
- add_block1,
advance_target_bb,
- fix_recovery_cfg,
SCHED_RGN
};
/* Determine if PAT sets a CLASS_LIKELY_SPILLED_P register. */
static bool
--- 2330,2348 ----
rgn_rank,
rgn_print_insn,
contributes_to_priority,
NULL, NULL,
NULL, NULL,
! 0, 0,
! rgn_add_remove_insn,
begin_schedule_ready,
advance_target_bb,
SCHED_RGN
};
+ static struct haifa_sched_info rgn_sched_info;
+

Please add a comment and a ChangeLog entry for the above variable.


/* Determine if PAT sets a CLASS_LIKELY_SPILLED_P register. */
static bool
*************** sets_likely_spilled_1 (rtx x, const_rtx
*** 2258,2266 ****
*ret = true;
}
/* Add dependences so that branches are scheduled to run last in their
block. */
- static void
add_branch_dependences (rtx head, rtx tail)
{
--- 2365,2374 ----
*ret = true;
}
+ static int *ref_counts;
+

Please add a comment and a ChangeLog entry for the above variable.


 /* Add dependences so that branches are scheduled to run last in their
    block.  */
 static void
 add_branch_dependences (rtx head, rtx tail)
 {
...

*************** compute_block_dependences (int bb)
*** 2540,2546 ****
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
sched_analyze (&tmp_deps, head, tail);
! add_branch_dependences (head, tail);
if (current_nr_blocks > 1)
propagate_deps (bb, &tmp_deps);
--- 2652,2670 ----
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
sched_analyze (&tmp_deps, head, tail);
! 
! #if 0
!   if (bb_ends_ebb_p (BASIC_BLOCK (BB_TO_BLOCK (bb)))
!       && sched_deps_info->use_cselib)
!     {
!       cselib_finish ();
!       cselib_init (true);
!     }
! #endif
! 

Please remove the code in #if 0 ... #endif.


!   /* Selective scheduling handles control dependencies by itself.  */
!   if (!sel_sched_p ())
!     add_branch_dependences (head, tail);


No ChangeLog entry for the new if statement.

if (current_nr_blocks > 1)
propagate_deps (bb, &tmp_deps);
*************** void debug_dependencies (rtx head, rtx t
*** 2642,2650 ****
INSN_UID (insn),
INSN_CODE (insn),
BLOCK_NUM (insn),
! sd_lists_size (insn, SD_LIST_BACK),
! INSN_PRIORITY (insn),
! insn_cost (insn));
if (recog_memoized (insn) < 0)
fprintf (sched_dump, "nothing");
--- 2766,2778 ----
INSN_UID (insn),
INSN_CODE (insn),
BLOCK_NUM (insn),
!            sched_emulate_haifa_p ? -1 : sd_lists_size (insn, SD_LIST_BACK),
!            (sel_sched_p () ? (sched_emulate_haifa_p ? -1
!                               : INSN_PRIORITY (insn))
!             : INSN_PRIORITY (insn)),
!            (sel_sched_p () ? (sched_emulate_haifa_p ? -1
!                               : insn_cost (insn))
!             : insn_cost (insn)));
if (recog_memoized (insn) < 0)
fprintf (sched_dump, "nothing");

No log entry for the change.


...


*************** schedule_insns (void)
*** 2990,3011 ****
fprintf (sched_dump, "\n\n");
}
! /* Clean up. */
free (rgn_table);
free (rgn_bb_table);
free (block_to_bb);
free (containing_rgn);
! regstat_free_calls_crossed ();
bitmap_clear (&not_in_df);
! sched_finish ();
}
/* INSN has been added to/removed from current region. */
static void
! add_remove_insn (rtx insn, int remove_p)
{
if (!remove_p)
rgn_n_insns++;
--- 3009,3243 ----
fprintf (sched_dump, "\n\n");
}
!   nr_regions = 0;
! 
!   free (rgn_table);
+   rgn_table = NULL;
+   free (rgn_bb_table);
+   rgn_bb_table = NULL;
+   free (block_to_bb);
+   block_to_bb = NULL;
+   free (containing_rgn);
+   containing_rgn = NULL;
!   free (ebb_head);
!   ebb_head = NULL;
! }
! 
! /* Setup global variables like CURRENT_BLOCKS and CURRENT_NR_BLOCK to
!    point to the region RGN.  */
! void
! rgn_setup_region (int rgn)

No changelog for the new function.


! {
!   int bb;
! 
!   /* Set variables for the current region.  */
!   current_nr_blocks = RGN_NR_BLOCKS (rgn);
!   current_blocks = RGN_BLOCKS (rgn);
! 
!   /* EBB_HEAD is a region-scope structure.  But we realloc it for
!      each region to save time/memory/something else.
!      See comments in add_block1, for what reasons we allocate +1 element.  */
!   ebb_head = xrealloc (ebb_head, (current_nr_blocks + 1) * sizeof (*ebb_head));
!   for (bb = 0; bb <= current_nr_blocks; bb++)
!     ebb_head[bb] = current_blocks + bb;
! }
! 
! /* Compute instruction dependencies in region RGN.  */
! void
! sched_rgn_compute_dependencies (int rgn)
! {

No changelog for the new function.


...

+ 
+ /* Setup scheduler infos.  */
+ void
+ rgn_setup_common_sched_info (void)
+ {

No changelog for the new function.



+   memcpy (&rgn_common_sched_info, &haifa_common_sched_info,
+           sizeof (rgn_common_sched_info));
+ 
+   rgn_common_sched_info.fix_recovery_cfg = rgn_fix_recovery_cfg;
+   rgn_common_sched_info.add_block = rgn_add_block;
+   rgn_common_sched_info.estimate_number_of_insns
+     = rgn_estimate_number_of_insns;
+   rgn_common_sched_info.sched_pass_id = SCHED_RGN_PASS;
+ 
+   common_sched_info = &rgn_common_sched_info;
+ }
+ 
+ void
+ rgn_setup_sched_infos (void)
+ {

No ChangeLog entry for the new function.  Please add a comment too.


+   if (!sel_sched_p ())
+     memcpy (&rgn_sched_deps_info, &rgn_const_sched_deps_info,
+             sizeof (rgn_sched_deps_info));
+   else
+     memcpy (&rgn_sched_deps_info, &rgn_const_sel_sched_deps_info,
+             sizeof (rgn_sched_deps_info));
+ 
+   sched_deps_info = &rgn_sched_deps_info;
+ 
+   memcpy (&rgn_sched_info, &rgn_const_sched_info, sizeof (rgn_sched_info));
+   current_sched_info = &rgn_sched_info;
+ }
+ 
+ /* The one entry point in this file.  */
+ void
+ schedule_insns (void)
+ {
+   int rgn;
+ 
+   /* Taking care of this degenerate case makes the rest of
+      this code simpler.  */
+   if (n_basic_blocks == NUM_FIXED_BLOCKS)
+     return;
+ 
+   rgn_setup_common_sched_info ();
+   rgn_setup_sched_infos ();
+ 
+   haifa_sched_init ();
+   sched_rgn_init (reload_completed);
+ 
+   bitmap_initialize (&not_in_df, 0);
    bitmap_clear (&not_in_df);
! 
!   /* Schedule every region in the subroutine.  */
!   for (rgn = 0; rgn < nr_regions; rgn++)
!     if (dbg_cnt (sched_region))
!       schedule_region (rgn);
! 
!   /* Clean up.  */
!   sched_rgn_finish ();
!   bitmap_clear (&not_in_df);
! 
!   haifa_sched_finish ();
  }
  /* INSN has been added to/removed from current region.  */
  static void
! rgn_add_remove_insn (rtx insn, int remove_p)
  {

No changelog for the new function.


   if (!remove_p)
     rgn_n_insns++;

...


*************** extend_regions (void)
*** 3031,3061 ****
containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
}
/* BB was added to ebb after AFTER. */
static void
! add_block1 (basic_block bb, basic_block after)
{
extend_regions ();
- bitmap_set_bit (&not_in_df, bb->index);
if (after == 0 || after == EXIT_BLOCK_PTR)
{
!       int i;
! 
!       i = RGN_BLOCKS (nr_regions);
!       /* I - first free position in rgn_bb_table.  */
! 
!       rgn_bb_table[i] = bb->index;
!       RGN_NR_BLOCKS (nr_regions) = 1;
!       RGN_DONT_CALC_DEPS (nr_regions) = after == EXIT_BLOCK_PTR;
!       RGN_HAS_REAL_EBB (nr_regions) = 0;
!       CONTAINING_RGN (bb->index) = nr_regions;
!       BLOCK_TO_BB (bb->index) = 0;
! 
!       nr_regions++;
! 
!       RGN_BLOCKS (nr_regions) = i + 1;
      }
    else
      {
--- 3263,3299 ----
    containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
  }
+ 
+ void
+ rgn_make_new_region_out_of_new_block (basic_block bb)

Please, add a comment.



+ {
+   int i;
+ 
+   i = RGN_BLOCKS (nr_regions);
+   /* I - first free position in rgn_bb_table.  */
+ 
+   rgn_bb_table[i] = bb->index;
+   RGN_NR_BLOCKS (nr_regions) = 1;
+   RGN_HAS_REAL_EBB (nr_regions) = 0;
+   RGN_DONT_CALC_DEPS (nr_regions) = 0;
+   CONTAINING_RGN (bb->index) = nr_regions;
+   BLOCK_TO_BB (bb->index) = 0;
+ 
+   nr_regions++;
+ 
+   RGN_BLOCKS (nr_regions) = i + 1;
+ }
+ 
+ /* BB was added to ebb after AFTER.  */
static void
! rgn_add_block (basic_block bb, basic_block after)
{
extend_regions ();
bitmap_set_bit (&not_in_df, bb->index);
if (after == 0 || after == EXIT_BLOCK_PTR)
{
! rgn_make_new_region_out_of_new_block (bb);
! RGN_DONT_CALC_DEPS (nr_regions - 1) = (after == EXIT_BLOCK_PTR);
}
else
{
...

*************** static unsigned int
*** 3174,3180 ****
 rest_of_handle_sched (void)
 {
 #ifdef INSN_SCHEDULING
!   schedule_insns ();
 #endif
   return 0;
 }
--- 3412,3422 ----
 rest_of_handle_sched (void)
 {
 #ifdef INSN_SCHEDULING
!   if (flag_selective_scheduling
!       && ! maybe_skip_selective_scheduling ())
!     run_selective_scheduling ();
!   else
!     schedule_insns ();
 #endif
   return 0;
 }

There is no changelog entry for the change.


*************** static unsigned int
*** 3195,3206 ****
 rest_of_handle_sched2 (void)
 {
 #ifdef INSN_SCHEDULING
!   /* Do control and data sched analysis again,
!      and write some more of the results to dump file.  */
!   if (flag_sched2_use_superblocks || flag_sched2_use_traces)
!     schedule_ebbs ();
   else
!     schedule_insns ();
 #endif
   return 0;
 }
--- 3437,3454 ----
 rest_of_handle_sched2 (void)
 {
 #ifdef INSN_SCHEDULING
!   if (flag_selective_scheduling2
!       && ! maybe_skip_selective_scheduling ())
!     run_selective_scheduling ();
   else
!     {
!       /* Do control and data sched analysis again,
! 	 and write some more of the results to dump file.  */
!       if (flag_sched2_use_superblocks || flag_sched2_use_traces)
! 	schedule_ebbs ();
!       else
! 	schedule_insns ();
!     }
 #endif
   return 0;
 }

There is no changelog entry for the change.


...

diff -cprNd -x .svn -x .hg trunk/gcc/sched-vis.c sel-sched-branch/gcc/sched-vis.c
*** trunk/gcc/sched-vis.c Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/sched-vis.c Fri Jan 11 16:37:10 2008
*************** along with GCC; see the file COPYING3.
*** 29,41 ****
#include "hard-reg-set.h"
#include "basic-block.h"
#include "real.h"
#include "sched-int.h"
#include "tree-pass.h"
static char *safe_concat (char *, char *, const char *);
- static void print_exp (char *, const_rtx, int);
- static void print_value (char *, const_rtx, int);
- static void print_pattern (char *, const_rtx, int);
#define BUF_LEN 2048
--- 29,39 ----
#include "hard-reg-set.h"
#include "basic-block.h"
#include "real.h"
+ #include "insn-attr.h"

This is missing from the ChangeLog and Makefile.in.  Please fix it.



#include "sched-int.h"
#include "tree-pass.h"
static char *safe_concat (char *, char *, const char *);
#define BUF_LEN 2048

...


*************** dump_insn_slim (FILE *f, rtx x)
*** 710,718 ****
char t[BUF_LEN + 32];
rtx note;
! print_insn (t, x, 1);
! fputs (t, f);
putc ('\n', f);
if (INSN_P (x) && REG_NOTES (x))
for (note = REG_NOTES (x); note; note = XEXP (note, 1))
{
--- 716,724 ----
char t[BUF_LEN + 32];
rtx note;
! dump_insn_slim_1 (f, x);
putc ('\n', f);
+ if (INSN_P (x) && REG_NOTES (x))
for (note = REG_NOTES (x); note; note = XEXP (note, 1))
{

There is no changelog entry for the change.



...


diff -cprNd -x .svn -x .hg trunk/gcc/target-def.h sel-sched-branch/gcc/target-def.h
*** trunk/gcc/target-def.h Fri May 30 17:32:06 2008
--- sel-sched-branch/gcc/target-def.h Wed Feb 27 20:10:35 2008
***************
*** 316,327 ****
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD 0
#define TARGET_SCHED_DFA_NEW_CYCLE 0
#define TARGET_SCHED_IS_COSTLY_DEPENDENCE 0
#define TARGET_SCHED_H_I_D_EXTENDED 0
#define TARGET_SCHED_SPECULATE_INSN 0
#define TARGET_SCHED_NEEDS_BLOCK_P 0
! #define TARGET_SCHED_GEN_CHECK 0
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC 0
#define TARGET_SCHED_SET_SCHED_FLAGS 0
#define TARGET_SCHED_SMS_RES_MII 0
#define TARGET_SCHED \
--- 316,336 ----
#define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD 0
#define TARGET_SCHED_DFA_NEW_CYCLE 0
#define TARGET_SCHED_IS_COSTLY_DEPENDENCE 0
+ #define TARGET_SCHED_ADJUST_COST_2 0

The above macro is missing from the ChangeLog.


 #define TARGET_SCHED_H_I_D_EXTENDED 0
+ #define TARGET_SCHED_ALLOC_SCHED_CONTEXT 0
+ #define TARGET_SCHED_INIT_SCHED_CONTEXT 0
+ #define TARGET_SCHED_SET_SCHED_CONTEXT 0
+ #define TARGET_SCHED_CLEAR_SCHED_CONTEXT 0
+ #define TARGET_SCHED_FREE_SCHED_CONTEXT 0
 #define TARGET_SCHED_SPECULATE_INSN 0
 #define TARGET_SCHED_NEEDS_BLOCK_P 0
! #define TARGET_SCHED_GEN_SPEC_CHECK 0

The renaming of the above macro is missing from the ChangeLog.


 #define TARGET_SCHED_FIRST_CYCLE_MULTIPASS_DFA_LOOKAHEAD_GUARD_SPEC 0
 #define TARGET_SCHED_SET_SCHED_FLAGS 0
+ #define TARGET_SCHED_GET_INSN_SPEC_DS 0
+ #define TARGET_SCHED_GET_INSN_CHECKED_DS 0
+ #define TARGET_SCHED_SKIP_RTX_P 0

The three macros above are missing from the ChangeLog.


...

#define TARGET_VECTORIZE_BUILTIN_MASK_FOR_LOAD 0
diff -cprNd -x .svn -x .hg trunk/gcc/target.h sel-sched-branch/gcc/target.h
*** trunk/gcc/target.h Fri May 30 17:32:04 2008
--- sel-sched-branch/gcc/target.h Wed Feb 27 19:59:31 2008
*************** struct gcc_target
*** 354,364 ****
the second insn (second parameter). */
bool (* is_costly_dependence) (struct _dep *_dep, int, int);
/* The following member value is a pointer to a function called
by the insn scheduler. This hook is called to notify the backend
that new instructions were emitted. */
void (* h_i_d_extended) (void);
! /* The following member value is a pointer to a function called
by the insn scheduler.
The first parameter is an instruction, the second parameter is the type
--- 354,383 ----
the second insn (second parameter). */
bool (* is_costly_dependence) (struct _dep *_dep, int, int);
+   /* Given the current cost, COST, of an insn, INSN, calculate and
+      return a new cost based on its relationship to DEP_INSN through the
+      dependence of type DEP_TYPE.  The default is to make no adjustment.  */
+   int (* adjust_cost_2) (rtx insn, int, rtx dep_insn, int cost, int dw);
+ 
+   /* The following member value is a pointer to a function called
+      by the insn scheduler.  This hook is called to notify the backend
+      that new instructions were emitted.  */
+   void (* h_i_d_extended) (void);
! 
!   /* Next 6 functions are for multi-point scheduling.  */
!
              ^
I see only 5 functions.

You missed comments for alloc_sched_context, clear_sched_context,
free_sched_context.


!   void *(* alloc_sched_context) (void);
! 
!   /* Fills the context from the local machine scheduler context.  */
!   void (* init_sched_context) (void *, bool);
! 
!   /* Sets local machine scheduler context to a saved value.  */
!   void (* set_sched_context) (void *);
! 
!   void (* clear_sched_context) (void *);
! 
!   void (* free_sched_context) (void *);
! /* The following member value is a pointer to a function called
by the insn scheduler.
The first parameter is an instruction, the second parameter is the type
*************** struct gcc_target
*** 386,392 ****
simple check). If the mutation of the check is requested (e.g. from
ld.c to chk.a), the third parameter is true - in this case the first
parameter is the previous check. */
! rtx (* gen_check) (rtx, rtx, bool);
/* The following member value is a pointer to a function controlling
what insns from the ready insn queue will be considered for the
--- 405,411 ----
simple check). If the mutation of the check is requested (e.g. from
ld.c to chk.a), the third parameter is true - in this case the first
parameter is the previous check. */
! rtx (* gen_spec_check) (rtx, rtx, int);
/* The following member value is a pointer to a function controlling
what insns from the ready insn queue will be considered for the

The hook renaming is missing from the ChangeLog.

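The five context hooks reviewed above describe a snapshot/switch protocol: allocate a context, fill it from the current scheduling state (or leave it clean), later restore it, and finally clear and free it.  A self-contained sketch of how a scheduler might drive such hooks (the struct and the one-int "state" are illustrative stand-ins, not GCC's actual target vector):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative stand-in for the target's scheduling state.  */
static int cur_dfa_cycle;

/* Hook table mirroring the shape of the multi-point scheduling hooks.  */
struct sched_context_hooks
{
  void *(*alloc_sched_context) (void);
  void (*init_sched_context) (void *, bool); /* snapshot current state */
  void (*set_sched_context) (void *);        /* restore a snapshot */
  void (*clear_sched_context) (void *);
  void (*free_sched_context) (void *);
};

static void *
my_alloc (void)
{
  return malloc (sizeof (int));
}

/* CLEAN_P requests a pristine context instead of a copy of the
   current state.  */
static void
my_init (void *tc, bool clean_p)
{
  *(int *) tc = clean_p ? 0 : cur_dfa_cycle;
}

static void
my_set (void *tc)
{
  cur_dfa_cycle = *(int *) tc;
}

static void
my_clear (void *tc)
{
  *(int *) tc = 0;
}

static void
my_free (void *tc)
{
  free (tc);
}

static const struct sched_context_hooks hooks =
  { my_alloc, my_init, my_set, my_clear, my_free };
```

Saving the state at one scheduling point and restoring it before exploring another is exactly what makes the hooks "multi-point": the scheduler can evaluate several candidate points from the same saved state.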

