This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[sel-sched]: Committed patch to extend support for ia64 speculation and various cleanups


Hello,

This patch extends the ia64 speculation support to generate speculation checks, thus making control speculation safe. A large portion of the patch is devoted to cleaning up the CFG manipulation routines used by the schedulers: since every control speculation check triggers a modification of the control flow, those routines had to be fixed to prevent a CFG blowup with lots of empty blocks.
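To give an idea of the new interface, here is a minimal sketch (not part of the patch itself; the helper name attach_recovery_block is made up for illustration) of how a scheduler turns a simple check into a branchy one using the routines introduced below -- sched_create_recovery_block (), sched_split_block, sched_create_recovery_edges () and sched_init_only_bb.  FIRST_BB and CHECK are assumed to be already prepared by the caller:

  /* Sketch only, not in the patch: attach a recovery block to CHECK,
     which was already emitted into FIRST_BB.  */
  static void
  attach_recovery_block (basic_block first_bb, rtx check)
  {
    basic_block rec, second_bb;
    rtx jump;

    /* The recovery block will hold the non-speculative twin of the load.  */
    rec = sched_create_recovery_block ();

    /* Split FIRST_BB right after the check ...  */
    second_bb = sched_split_block (first_bb, check);

    /* ... and wire FIRST_BB -> REC -> SECOND_BB, emitting the jump from
       the recovery block to the continuation.  */
    sched_create_recovery_edges (first_bb, rec, second_bb);

    /* Let the current scheduler initialize its data for the new blocks
       and for the jump at the end of the recovery block.  */
    sched_init_only_bb (second_bb, first_bb);
    sched_init_only_bb (rec, EXIT_BLOCK_PTR);

    jump = BB_END (rec);
    haifa_init_insn (jump);
  }

This mirrors what create_check_block_twin () now does instead of the open-coded edge fixup it used to contain, and the selective scheduler builds on the same pieces through sel_create_recovery_block ().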

Committed to sel-sched-branch.


Thanks,


Maxim
2007-06-14  Maxim Kuvyrkov  <mkuvyrkov@ispras.ru>

	Extend support for ia64 speculation to generate branchy checks.
	Move data sets to per basic block data structures.
	
	* sched-ebb.c (begin_schedule_ready): Update.
	
	* rtlanal.c (may_trap_p_1): Fix inverted check of
	targetm.sched.skip_rtx_p () when looking inside UNSPECs.
	
	* cfghooks.c (get_cfg_hooks, set_cfg_hooks): New functions.
	
	* cfghooks.h (get_cfg_hooks, set_cfg_hooks): Declare.
	
	* haifa-sched.c (restore_other_notes): Fix BB_END update when the
	restored notes end the basic block.
	(haifa_sched_init): Initialize hooks.
	(haifa_sched_finish): Finalize hooks.
	(init_before_recovery): Update.
	(create_recovery_block): Make global.  Rename to
	sched_create_recovery_block ().  Update.
	(sched_create_recovery_edges): Separate cfg manipulation code from
	create_check_block_twin () into new function.
	(create_check_block_twin): Update.
	(sched_speculate_insn): Handle be-in speculations.
	(insn_luid): New debug function.
	(sched_init_only_bb): New hook.
	(haifa_init_only_bb): Make static.
	(sched_split_block): New hook.
	(sched_split_block_1): New function.
	(sched_create_empty_bb): New hook.
	(sched_create_empty_bb_1): New function.
	
	* sel-sched.c (old_create_basic_block, old_delete_basic_block): Remove.
	(create_insn_rtx_with_rhs, replace_src_with_reg_ok_p): Update.
	(replace_dest_with_reg_ok_p, create_insn_rtx_with_lhs): Ditto.
	(create_speculation_check_insn_rtx): Rename to
	create_speculation_check ().  Rewrite to handle branchy speculation
	checks.
	(apply_spec_to_expr, un_speculate, undo_transformations): Update.
	(compute_av_set, compute_lv_set, update_data_sets): Move data sets
	to per basic block data structures.
	(find_used_regs_1, gen_insn_from_expr_after): Update.
	(generate_bookkeeping_insn, fill_insns, move_op): Ditto.
	(old_rtl_hooks): Remove.
	(sel_region_init): Move initialization of hooks to
	sel_register_rtl_hooks () and sel_register_cfg_hooks ().  Update.
	(sel_region_finish): Make sure lv sets at region entries are valid.
	Update.
	(sel_sched_region_1, sel_global_init, sel_global_finish): Update.
	
	* sel-sched-ir.c (sel_global_bb_info, sel_region_bb_info): New vector
	variables.
	(sel_bb_info): Remove.
	(sel_extend_global_bb_info): New function.
	(extend_region_bb_info, extend_bb_info): New static functions.
	(sel_finish_global_bb_info): New function.
	(finish_region_bb_info): New static function.
	(init_fences, new_fences_add): Update.
	(nop_vinsn): New static variable.
	(get_nop_from_pool, return_nop_to_pool, free_nop_pool): Update.
	(sel_rtx_equal_p): New static function.
	(vinsn_equal_p): Use it.
	(sel_gen_insn_from_rtx_after): Update.
	(init_insn_force_unique_p): New static variable.
	(sel_gen_recovery_insn_from_rtx_after): New function.
	(vinsns_correlate_as_rhses_p, merge_expr_data, merge_expr): Update.
	(sel_cfg_note_p): Remove.
	(init_global_and_expr_for_bb): New static function.
	(init_global_and_expr_for_insn, sel_init_global_and_expr): Update.
	(finish_global_and_expr_insn_1): Remove.
	(finish_global_and_expr_for_bb): New static function.
	(finish_global_and_expr_insn_2): Rename to
	finish_global_and_expr_insn ().  Update.
	(sel_finish_global_and_expr): Update.
	(has_dependence_note_reg_use): Handle be-in speculations.
	(bookkeeping_can_be_created_if_moved_through): Update.
	(insn_is_the_only_one_in_bb_p): New static function.
	(sched_sel_remove_insn): Rename to sel_remove_insn ().  Update.
	(transfer_data_sets): Remove.
	(get_seqno_of_a_pred): Update.
	(finish_insn): Rename to finish_insns ().
	(sel_rtl_insn_added): Update.
	(orig_rtl_hooks, sel_rtl_hooks): New static variables.
	(sel_register_rtl_hooks, sel_unregister_rtl_hooks): New functions.
	(empty_vinsn): Remove.
	(insn_init_create_new_vinsn_p): New static variable.
	(set_insn_init, init_insn, init_simplejump): Update.
	(insn_init_move_lv_set_if_bb_header): Remove.
	(sel_init_new_insns): Update.
	(init_lv_set_for_insn): Rename to init_lv_set ().  Update.
	(init_lv_sets): Update.
	(release_lv_set_for_insn): Rename to free_lv_set ().  Update.
	(free_lv_sets): Update.
	(init_invalid_lv_set, init_invalid_av_set, init_invalid_data_sets): New
	static functions.
	(free_av_set, free_data_sets, exchange_lv_sets, exchange_av_sets):
	Ditto.
	(exchange_data_sets): Ditto.
	(get_av_set, get_av_level): New functions.
	(sel_bb_header_1): Remove.
	(sel_bb_header): Rename to sel_bb_head ().  Update.
	(sel_bb_header_p): Rename to sel_bb_head_p ().  Update.
	(sel_bb_empty_p_1): Remove.
	(sel_bb_empty_p, sel_bb_end): Update.
	(extend_bb): Remove.
	(sel_init_bbs): Update.
	(num_preds_gt_1): Rename to sel_num_cfg_preds_gt_1 ().  Update.
	(rtx_vec_t): New typedef.
	(bb_note_pool): New vector variable.
	(return_bb_to_pool, get_bb_note_from_pool, free_bb_note_pool): New
	static functions.
	(sel_add_or_remove_bb): Make static.  Update.
	(move_bb_info): New static function.
	(sel_remove_empty_bb): Update.
	(remove_empty_bb): New static function.
	(orig_cfg_hooks): New static variable.
	(sel_init_only_bb): New static function.
	(sel_split_block): Make static.  Update.
	(sel_split_edge): Update.
	(sel_create_empty_bb, sel_create_recovery_block): New static functions.
	(sel_redirect_edge_force): Rename to
	sel_redirect_edge_and_branch_force ().  Update.
	(sel_redirect_edge_and_branch): Update.
	(sel_cfg_hooks): New static variable.
	(sel_register_cfg_hooks, sel_unregister_cfg_hooks): New functions.
	(create_insn_rtx_from_pattern_1, create_insn_rtx_from_pattern): Update.
	(create_copy_of_insn_rtx, setup_nop_and_exit_insns): Ditto.
	(setup_empty_vinsn): Rename to setup_nop_vinsn ().  Update.
	(free_empty_vinsn): Rename to free_nop_vinsn ().  Update.
	(sel_add_loop_preheader, sel_is_loop_preheader_p): Update.
	(insn_sid): New debug function.
	
	* sel-sched-ir.h: Update.	
	(expr_equal_p): Remove.
	(struct _sel_insn_data: ws_level, spec_checked_ds): New fields.
	(struct _sel_insn_rtx_data: lv): Remove field.
	(struct _sel_global_bb_info, struct _sel_region_bb_info): New types.
	(get_all_loop_exits, _succ_iter_cond, _eligible_successor_edge_p):
	Update.
	
	* sel-sched-dump.c (dump_vinsn_flags, sel_dump_cfg_insn): Update.
	(sel_dump_cfg_2): Ditto.
	
	* sel-sched-dump.h: Update.

	* sched-deps.c (ds_max_merge): Update.

	* sched-int.h: Update.

	* sched-rgn.c (rgn_make_new_region_out_of_new_block): New function.
	(rgn_add_block): Update.

	* sched-rgn.h: Update.

	* config/ia64/ia64.c (insn_can_be_in_speculative_p): New static
	function.
	(ia64_speculate_insn, ia64_needs_block_p): Support branchy checks
	during selective scheduling.

	* cfgrtl.c (create_basic_block_structure): Update.
----------------------------------------------------------------------
r28697:  cold | 2007-06-09 10:49:31 +0400

* Merge sel-chk into sel-sched-dev.
----------------------------------------------------------------------
--- gcc-local/sel-sched-dev/gcc/sched-ebb.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sched-ebb.c	(revision 28697)
@@ -253,7 +253,7 @@ begin_schedule_ready (rtx insn, rtx last
       gcc_assert (current_sched_info->next_tail);
 
       /* Append new basic block to the end of the ebb.  */
-      haifa_init_only_bb (bb, last_bb);
+      sched_init_only_bb (bb, last_bb);
       gcc_assert (last_bb == bb);
     }
 }
--- gcc-local/sel-sched-dev/gcc/rtlanal.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/rtlanal.c	(revision 28697)
@@ -2093,7 +2093,7 @@ may_trap_p_1 (rtx x, unsigned flags)
 
   if (code == UNSPEC
       && (targetm.sched.skip_rtx_p == NULL
-	  || !targetm.sched.skip_rtx_p (x)))
+	  || targetm.sched.skip_rtx_p (x)))
     /* Support ia64 speculation.  */
     return may_trap_p_1 (XVECEXP (x, 0, 0), flags);
 
--- gcc-local/sel-sched-dev/gcc/cfghooks.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/cfghooks.c	(revision 28697)
@@ -56,6 +56,18 @@ tree_register_cfg_hooks (void)
   cfg_hooks = &tree_cfg_hooks;
 }
 
+struct cfg_hooks
+get_cfg_hooks (void)
+{
+  return *cfg_hooks;
+}
+
+void
+set_cfg_hooks (struct cfg_hooks new_cfg_hooks)
+{
+  *cfg_hooks = new_cfg_hooks;
+}
+
 /* Returns current ir type.  */
 
 enum ir_type
--- gcc-local/sel-sched-dev/gcc/cfghooks.h	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/cfghooks.h	(revision 28697)
@@ -186,5 +186,7 @@ extern enum ir_type current_ir_type (voi
 extern void rtl_register_cfg_hooks (void);
 extern void cfg_layout_rtl_register_cfg_hooks (void);
 extern void tree_register_cfg_hooks (void);
+extern struct cfg_hooks get_cfg_hooks (void);
+extern void set_cfg_hooks (struct cfg_hooks);
 
 #endif  /* GCC_CFGHOOKS_H */
--- gcc-local/sel-sched-dev/gcc/haifa-sched.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/haifa-sched.c	(revision 28697)
@@ -509,7 +509,6 @@ static void process_insn_depend_be_in_sp
 static void begin_speculative_block (rtx);
 static void add_to_speculative_block (rtx);
 static void init_before_recovery (void);
-static basic_block create_recovery_block (void);
 static void create_check_block_twin (rtx, bool);
 static void fix_recovery_deps (basic_block);
 static void haifa_change_pattern (rtx, rtx);
@@ -1411,13 +1410,17 @@ restore_other_notes (rtx head, basic_blo
 	  set_block_for_insn (note_head, head_bb);
 	  note_head = PREV_INSN (note_head);
 	}
-      /* In the above cycle we've missed this note:  */
+      /* In the above cycle we've missed this note.  */
       set_block_for_insn (note_head, head_bb);
 
       PREV_INSN (note_head) = PREV_INSN (head);
       NEXT_INSN (PREV_INSN (head)) = note_head;
       PREV_INSN (head) = note_list;
       NEXT_INSN (note_list) = head;
+
+      if (BLOCK_FOR_INSN (head) != head_bb)
+	BB_END (head_bb) = note_list;
+
       head = note_head;
     }
 
@@ -2717,6 +2720,8 @@ sched_init (void)
   curr_state = xmalloc (dfa_state_size);
 }
 
+static void haifa_init_only_bb (basic_block, basic_block);
+
 /* Initialize data structures specific to the Haifa scheduler.  */
 void
 haifa_sched_init (void)
@@ -2754,6 +2759,10 @@ haifa_sched_init (void)
     VEC_free (basic_block, heap, bbs);
   }
 
+  sched_init_only_bb = haifa_init_only_bb;
+  sched_split_block = sched_split_block_1;
+  sched_create_empty_bb = sched_create_empty_bb_1;
+
 #ifdef ENABLE_CHECKING
   /* This is used preferably for finding bugs in check_cfg () itself.
      We must call sched_bbs_init () before check_cfg () because check_cfg ()
@@ -2769,6 +2778,10 @@ haifa_sched_init (void)
 void
 haifa_sched_finish (void)
 {
+  sched_create_empty_bb = NULL;
+  sched_split_block = NULL;
+  sched_init_only_bb = NULL;
+
   if (spec_info && spec_info->dump)
     {
       char c = reload_completed ? 'a' : 'b';
@@ -3506,10 +3519,10 @@ init_before_recovery (void)
       basic_block single, empty;
       rtx x, label;
 
-      single = create_empty_bb (last);
-      empty = create_empty_bb (single);            
+      single = sched_create_empty_bb (last);
+      empty = sched_create_empty_bb (single);
 
-      single->count = last->count;     
+      single->count = last->count;
       empty->count = last->count;
       single->frequency = last->frequency;
       empty->frequency = last->frequency;
@@ -3529,8 +3542,8 @@ init_before_recovery (void)
           
       emit_barrier_after (x);
 
-      haifa_init_only_bb (empty, NULL);
-      haifa_init_only_bb (single, NULL);
+      sched_init_only_bb (empty, NULL);
+      sched_init_only_bb (single, NULL);
 
       before_recovery = single;
 
@@ -3544,8 +3557,8 @@ init_before_recovery (void)
 }
 
 /* Returns new recovery block.  */
-static basic_block
-create_recovery_block (void)
+basic_block
+sched_create_recovery_block (void)
 {
   rtx label;
   rtx barrier;
@@ -3553,8 +3566,7 @@ create_recovery_block (void)
   
   added_recovery_block_p = true;
 
-  if (!before_recovery)
-    init_before_recovery ();
+  init_before_recovery ();
 
   barrier = get_last_bb_insn (before_recovery);
   gcc_assert (BARRIER_P (barrier));
@@ -3578,6 +3590,57 @@ create_recovery_block (void)
   return rec;
 }
 
+/* Create edges: FIRST_BB -> REC; FIRST_BB -> SECOND_BB; REC -> SECOND_BB
+   and emit necessary jumps.  */
+void
+sched_create_recovery_edges (basic_block first_bb, basic_block rec,
+			     basic_block second_bb)
+{
+  rtx label;
+  rtx jump;
+  edge e;
+  int edge_flags;
+
+  /* This is fixing of incoming edge.  */
+  /* ??? Which other flags should be specified?  */      
+  if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
+    /* Partition type is the same, if it is "unpartitioned".  */
+    edge_flags = EDGE_CROSSING;
+  else
+    edge_flags = 0;
+      
+  e = make_edge (first_bb, rec, edge_flags);
+
+  gcc_assert (NOTE_INSN_BASIC_BLOCK_P (BB_HEAD (second_bb)));
+  label = block_label (second_bb);
+
+  jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
+  JUMP_LABEL (jump) = label;
+  LABEL_NUSES (label)++;
+
+  if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
+    /* Partition type is the same, if it is "unpartitioned".  */
+    {
+      /* Rewritten from cfgrtl.c.  */
+      if (flag_reorder_blocks_and_partition
+	  && targetm.have_named_sections
+	  /*&& !any_condjump_p (jump)*/)
+	/* any_condjump_p (jump) == false.
+	   We don't need the same note for the check because
+	   any_condjump_p (check) == true.  */
+	{
+	  REG_NOTES (jump) = gen_rtx_EXPR_LIST (REG_CROSSING_JUMP,
+						NULL_RTX,
+						REG_NOTES (jump));
+	}
+      edge_flags = EDGE_CROSSING;
+    }
+  else
+    edge_flags = 0;  
+
+  make_single_succ_edge (rec, second_bb, edge_flags);  
+}
+
 /* This function creates recovery code for INSN.  If MUTATE_P is nonzero,
    INSN is a simple check, that should be converted to branchy one.  */
 static void
@@ -3605,7 +3668,7 @@ create_check_block_twin (rtx insn, bool 
   /* Create recovery block.  */
   if (mutate_p || targetm.sched.needs_block_p (todo_spec))
     {
-      rec = create_recovery_block ();
+      rec = sched_create_recovery_block ();
       label = BB_HEAD (rec);
     }
   else
@@ -3687,58 +3750,17 @@ create_check_block_twin (rtx insn, bool 
     {
       basic_block first_bb, second_bb;
       rtx jump;
-      edge e;
-      int edge_flags;
 
       first_bb = BLOCK_FOR_INSN (check);
-      e = split_block (first_bb, check);
-      /* split_block emits note if *check == BB_END.  Probably it 
-	 is better to rip that note off.  */
-      gcc_assert (e->src == first_bb);
-      second_bb = e->dest;
-
-      /* This is fixing of incoming edge.  */
-      /* ??? Which other flags should be specified?  */      
-      if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
-	/* Partition type is the same, if it is "unpartitioned".  */
-	edge_flags = EDGE_CROSSING;
-      else
-	edge_flags = 0;
-      
-      e = make_edge (first_bb, rec, edge_flags);
+      second_bb = sched_split_block (first_bb, check);
 
-      haifa_init_only_bb (second_bb, first_bb);
-      
-      gcc_assert (NOTE_INSN_BASIC_BLOCK_P (BB_HEAD (second_bb)));
-      label = block_label (second_bb);
-      jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
-      JUMP_LABEL (jump) = label;
-      LABEL_NUSES (label)++;
-      haifa_init_insn (jump);
+      sched_create_recovery_edges (first_bb, rec, second_bb);
 
-      if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
-	/* Partition type is the same, if it is "unpartitioned".  */
-	{
-	  /* Rewritten from cfgrtl.c.  */
-	  if (flag_reorder_blocks_and_partition
-	      && targetm.have_named_sections
-	      /*&& !any_condjump_p (jump)*/)
-	    /* any_condjump_p (jump) == false.
-	       We don't need the same note for the check because
-	       any_condjump_p (check) == true.  */
-	    {
-	      REG_NOTES (jump) = gen_rtx_EXPR_LIST (REG_CROSSING_JUMP,
-						    NULL_RTX,
-						    REG_NOTES (jump));
-	    }
-	  edge_flags = EDGE_CROSSING;
-	}
-      else
-	edge_flags = 0;  
-      
-      make_single_succ_edge (rec, second_bb, edge_flags);  
-      
-      haifa_init_only_bb (rec, EXIT_BLOCK_PTR);
+      sched_init_only_bb (second_bb, first_bb);      
+      sched_init_only_bb (rec, EXIT_BLOCK_PTR);
+
+      jump = BB_END (rec);
+      haifa_init_insn (jump);
     }
 
   /* Move backward dependences from INSN to CHECK and 
@@ -3968,16 +3990,11 @@ sched_speculate_insn (rtx insn, ds_t req
       && side_effects_p (PATTERN (insn)))
     return -1;
   
-  if (request & BE_IN_SPEC)
-    {            
-      if (may_trap_p (PATTERN (insn)))
-        return -1;
-      
-      if (!(request & BEGIN_SPEC))
-        return 0;
-    }
+  if ((request & BE_IN_SPEC)
+      && may_trap_p (PATTERN (insn)))
+    return -1;
 
-  request &= BEGIN_SPEC;
+  request &= SPECULATIVE;
 
   return targetm.sched.speculate_insn (insn, request, new_pat);
 }
@@ -4565,7 +4582,7 @@ sched_init_bb (basic_block bb)
       /* Initialize GLAT (global_live_at_{start, end}) structures.
 	 GLAT structures are used to substitute global_live_{start, end}
 	 regsets during scheduling.  This is necessary to use such functions as
-	 split_block (), as they assume consistency of register live
+	 sched_split_block (), as they assume consistency of register live
 	 information.  */
       glat_start[bb->index] = bb->il.rtl->global_live_at_start;
       glat_end[bb->index] = bb->il.rtl->global_live_at_end;
@@ -4797,6 +4814,13 @@ sched_finish_luids (void)
   sched_max_luid = 1;
 }
 
+/* Return logical uid of INSN.  Helpful while debugging.  */
+int
+insn_luid (rtx insn)
+{
+  return INSN_LUID (insn);
+}
+
 /* Extend per insn data in the target.  */
 void
 sched_extend_target (void)
@@ -4868,8 +4892,10 @@ haifa_init_insn (rtx insn)
   haifa_init_h_i_d (NULL, NULL, NULL, insn);
 }
 
+void (* sched_init_only_bb) (basic_block, basic_block);
+
 /* Init data for the new basic block BB which comes after AFTER.  */
-void
+static void
 haifa_init_only_bb (basic_block bb, basic_block after)
 {
   gcc_assert (bb != NULL
@@ -4884,4 +4910,33 @@ haifa_init_only_bb (basic_block bb, basi
     common_sched_info->add_block (bb, after);
 }
 
+/* Split block function.  Different schedulers might use different functions
+   to handle their internal data consistent.  */
+basic_block (* sched_split_block) (basic_block, rtx);
+
+/* A generic version of sched_split_block ().  */
+basic_block
+sched_split_block_1 (basic_block first_bb, rtx after)
+{
+  edge e;
+
+  e = split_block (first_bb, after);
+  gcc_assert (e->src == first_bb);
+
+  /* sched_split_block emits note if *check == BB_END.  Probably it 
+     is better to rip that note off.  */
+
+  return e->dest;
+}
+
+/* Create empty basic block after the specified block.  */
+basic_block (* sched_create_empty_bb) (basic_block);
+
+/* A generic version of sched_create_empty_bb ().  */
+basic_block
+sched_create_empty_bb_1 (basic_block after)
+{
+  return create_empty_bb (after);
+}
+
 #endif /* INSN_SCHEDULING */
--- gcc-local/sel-sched-dev/gcc/sel-sched.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sel-sched.c	(revision 28697)
@@ -171,18 +171,15 @@ static VEC(rhs_t, heap) *vec_av_set = NU
 static int sel_sched_region_run = 0;
 
 
-basic_block (*old_create_basic_block) (void *, void *, basic_block);
-static void (*old_delete_basic_block) (basic_block);
-
 /* Forward declarations of static functions.  */
 static bool rtx_search (rtx, rtx);
 static int sel_rank_for_schedule (const void *, const void *);
 static bool equal_after_moveup_path_p (rhs_t, ilist_t, rhs_t);
 static regset compute_live (insn_t);
-static basic_block generate_bookkeeping_insn (rhs_t, insn_t, edge, edge);
+static void generate_bookkeeping_insn (rhs_t, insn_t, edge, edge);
 static bool find_used_regs (insn_t, av_set_t, regset, HARD_REG_SET *, 
                             def_list_t *);
-static bool move_op (insn_t, av_set_t, ilist_t, edge, edge, rhs_t);
+static bool move_op (insn_t, av_set_t, ilist_t, edge, edge, expr_t);
 static void sel_sched_region_1 (void);
 static void sel_sched_region_2 (sel_sched_region_2_data_t);
 
@@ -530,7 +527,7 @@ create_insn_rtx_with_rhs (vinsn_t vi, rt
   lhs_rtx = copy_rtx (VINSN_LHS (vi));
 
   pattern = gen_rtx_SET (VOIDmode, lhs_rtx, rhs_rtx);
-  insn_rtx = create_insn_rtx_from_pattern (pattern);
+  insn_rtx = create_insn_rtx_from_pattern (pattern, NULL_RTX);
 
   return insn_rtx;
 }
@@ -559,7 +556,7 @@ create_insn_rtx_with_rhs (vinsn_t vi, rt
 static bool
 replace_src_with_reg_ok_p (insn_t insn, rtx new_src_reg)
 {
-  vinsn_t vi = INSN_VI (insn);
+  vinsn_t vi = INSN_VINSN (insn);
   enum machine_mode mode;
   rtx dst_loc;
   bool res;
@@ -585,7 +582,7 @@ replace_src_with_reg_ok_p (insn_t insn, 
 static bool
 replace_dest_with_reg_ok_p (insn_t insn, rtx new_reg)
 {
-  vinsn_t vi = INSN_VI (insn);
+  vinsn_t vi = INSN_VINSN (insn);
   bool res;
 
   /* We should deal here only with separable insns.  */
@@ -611,7 +608,7 @@ create_insn_rtx_with_lhs (vinsn_t vi, rt
   rhs_rtx = copy_rtx (VINSN_RHS (vi));
 
   pattern = gen_rtx_SET (VOIDmode, lhs_rtx, rhs_rtx);
-  insn_rtx = create_insn_rtx_from_pattern (pattern);
+  insn_rtx = create_insn_rtx_from_pattern (pattern, NULL_RTX);
 
   return insn_rtx;
 }
@@ -1309,19 +1306,80 @@ can_overcome_dep_p (ds_t ds)
   return true;
 }
 
-/* Get a speculation check instruction from the target.  SPEC_EXPR is a
-   speculative expression.  */
-static rtx
-create_speculation_check_insn_rtx (rtx spec_insn_rtx, ds_t check_ds)
+static bool speculate_expr (expr_t, ds_t);
+
+/* Get a speculation check instruction.
+   C_RHS is a speculative expression,
+   CHECK_DS describes speculations that should be checked,
+   ORIG_INSN is the original non-speculative insn in the stream.  */
+static insn_t
+create_speculation_check (expr_t c_rhs, ds_t check_ds, insn_t orig_insn)
 {
   rtx check_pattern;
+  rtx insn_rtx;
+  insn_t insn;
+  basic_block recovery_block;
+  rtx label;
+
+  sel_dump_cfg ("before-gen-spec-check");
+
+  /* Create a recovery block if target is going to emit branchy check.  */
+  if (targetm.sched.needs_block_p (check_ds))
+    {
+      recovery_block = sel_create_recovery_block (orig_insn);
+      label = BB_HEAD (recovery_block);
+    }
+  else
+    {
+      recovery_block = NULL;
+      label = NULL_RTX;
+    }
 
-  check_pattern = targetm.sched.gen_spec_check (spec_insn_rtx, NULL_RTX,
+  /* Get pattern of the check.  */
+  check_pattern = targetm.sched.gen_spec_check (EXPR_INSN_RTX (c_rhs), label,
 						check_ds);
 
   gcc_assert (check_pattern != NULL);
 
-  return create_insn_rtx_from_pattern (check_pattern);
+  /* Emit check.  */
+  insn_rtx = create_insn_rtx_from_pattern (check_pattern, label);
+
+  insn = sel_gen_insn_from_rtx_after (insn_rtx, INSN_EXPR (orig_insn),
+				      INSN_SEQNO (orig_insn), orig_insn);
+
+  /* Make check to be non-speculative.  */
+  EXPR_SPEC_DONE_DS (INSN_EXPR (insn)) &= ~check_ds;
+  INSN_SPEC_CHECKED_DS (insn) = check_ds;
+
+  if (recovery_block != NULL)
+    /* Emit copy of original insn (though with replaced target register,
+       if needed) to the recovery block.  */
+    {
+      rtx twin_rtx;
+      insn_t twin;
+
+      twin_rtx = copy_rtx (PATTERN (EXPR_INSN_RTX (c_rhs)));
+      twin_rtx = create_insn_rtx_from_pattern (twin_rtx, NULL_RTX);
+      twin = sel_gen_recovery_insn_from_rtx_after (twin_rtx, INSN_EXPR (insn),
+						   INSN_SEQNO (insn),
+						   bb_note (recovery_block));
+    }
+
+  /* If we've generated a data speculation check, make sure
+     that all the bookkeeping instruction we'll create during
+     this move_op () will allocate an ALAT entry so that the
+     check won't fail.
+     In case of control speculation we must convert C_RHS to control
+     speculative mode, because failing to do so will bring us an exception
+     thrown by the non-control-speculative load.  */
+  {
+    check_ds = ds_get_max_dep_weak (check_ds);
+    speculate_expr (c_rhs, check_ds);
+  }
+
+  sel_dump_cfg ("after-gen-spec-check");
+
+  return insn;
 }
 
 /* Try to transform EXPR to data speculative version.  Return true on
@@ -1346,7 +1404,7 @@ apply_spec_to_expr (expr_t expr, ds_t ds
 
     case 1:
       {
-	rtx spec_insn_rtx = create_insn_rtx_from_pattern (spec_pat);
+	rtx spec_insn_rtx = create_insn_rtx_from_pattern (spec_pat, NULL_RTX);
 	vinsn_t spec_vinsn = create_vinsn_from_insn_rtx (spec_insn_rtx);
 
 	change_vinsn_in_expr (expr, spec_vinsn);
@@ -1402,10 +1460,10 @@ has_spec_dependence_p (expr_t expr, insn
   return 0;
 }
 
-/* Add to AVP those exprs that might have been transformed to their speculative
-   versions when moved through INSN.  */
+/* Record speculations that EXPR should perform in order to be moved through
+   INSN.  */
 static void
-un_speculate (expr_t expr, insn_t insn, av_set_t *new_set_ptr)
+un_speculate (expr_t expr, insn_t insn)
 {
   ds_t expr_spec_done_ds;
   ds_t full_ds;
@@ -1418,30 +1476,12 @@ un_speculate (expr_t expr, insn_t insn, 
     return;
 
   full_ds = has_spec_dependence_p (expr, insn);
+
   if (full_ds == 0)
     return;
-  
-  {
-    expr_def _new_expr, *new_expr = &_new_expr;
-    
-    copy_expr (new_expr, expr);
-    
-    {
-      bool b;
-      
-      full_ds = ds_get_speculation_types (full_ds);
-      expr_spec_done_ds &= ~full_ds;
-      
-      b = apply_spec_to_expr (new_expr, expr_spec_done_ds);
-      gcc_assert (b);
-      
-      EXPR_SPEC_TO_CHECK_DS (new_expr) |= full_ds;
-    }
-    
-    av_set_add (new_set_ptr, new_expr);
-    
-    clear_expr (new_expr);
-  }
+
+  full_ds = ds_get_speculation_types (full_ds);
+  EXPR_SPEC_TO_CHECK_DS (expr) |= full_ds;
 }
 
 
@@ -1474,7 +1514,7 @@ undo_transformations (av_set_t *av_ptr, 
 {
   av_set_iterator av_iter;
   rhs_t rhs;
-  av_set_t new_set = NULL;
+  av_set_t new_set;
 
   /* First, kill any RHS that uses registers set by an insn.  This is 
      required for correctness.  */
@@ -1497,10 +1537,10 @@ undo_transformations (av_set_t *av_ptr, 
   FOR_EACH_RHS (rhs, av_iter, *av_ptr)
     {
       if (1 || bitmap_bit_p (EXPR_CHANGED_ON_INSNS (rhs), INSN_LUID (insn)))
-        un_speculate (rhs, insn, &new_set);
+        un_speculate (rhs, insn);
     }
-  
-  av_set_union_and_clear (av_ptr, &new_set);
+
+  new_set = NULL;
 
   FOR_EACH_RHS (rhs, av_iter, *av_ptr)
     {
@@ -1925,29 +1965,34 @@ compute_av_set (insn_t insn, ilist_t p, 
     }
 
   /* If insn already has valid av(insn) computed, just return it.  */ 
-  if (INSN_AV_VALID_P (insn))
+  if (AV_SET_VALID_P (insn))
     {
+      av_set_t av_set;
+
+      if (sel_bb_head_p (insn))
+	av_set = BB_AV_SET (BLOCK_FOR_INSN (insn));
+      else
+	av_set = NULL;
+
       line_start ();
       print ("found valid av (%d): ", INSN_UID (insn));
-      dump_av_set (AV_SET (insn));
+      dump_av_set (av_set);
       line_finish ();
       block_finish ();
 
-      return unique_p ? av_set_copy (AV_SET (insn)) : AV_SET (insn);
+      return unique_p ? av_set_copy (av_set) : av_set;
     }
 
   /* If the window size exceeds at insn during the first computation of 
      av(group), leave a window boundary mark at insn, so further 
      update_data_sets calls do not compute past insn.  */
-  if (ws > MAX_WS)
+  if (ws > MAX_WS && !sel_bb_head_p (insn))
     {
       print ("Max software lookahead window size reached");
       
       /* We can reach max lookahead size at bb_header, so clean av_set 
 	 first.  */
-      av_set_clear (&AV_SET (insn));
-
-      AV_LEVEL (insn) = global_level;
+      INSN_WS_LEVEL (insn) = global_level;
       block_finish ();
       return NULL;
     }
@@ -2041,11 +2086,10 @@ compute_av_set (insn_t insn, ilist_t p, 
   if (!INSN_NOP_P (insn))
     {
       expr_t expr;
-      vinsn_t vi = INSN_VI (insn);
 
       moveup_set_rhs (&av1, insn, false);
       
-      expr = av_set_lookup (av1, vi);
+      expr = av_set_lookup (av1, INSN_VINSN (insn));
 
       if (expr != NULL)
 	/* ??? It is not clear if we should replace or merge exprs in this
@@ -2062,6 +2106,20 @@ compute_av_set (insn_t insn, ilist_t p, 
 	av_set_add (&av1, INSN_EXPR (insn));
     }
 
+  /* If insn is a bb_header, leave a copy of av1 here.  */
+  if (sel_bb_head_p (insn))
+    {
+      basic_block bb = BLOCK_FOR_INSN (insn);
+
+      /* Clear stale bb_av_set.  */
+      av_set_clear (&BB_AV_SET (bb));
+
+      print ("Save av(%d) in bb header", INSN_UID (insn));
+
+      BB_AV_SET (bb) = unique_p ? av_set_copy (av1) : av1;
+      BB_AV_LEVEL (bb) = global_level;
+    }
+
   line_start ();
   print ("insn: ");
   dump_insn_1 (insn, 1);
@@ -2071,18 +2129,6 @@ compute_av_set (insn_t insn, ilist_t p, 
   print ("av (%d): ", INSN_UID (insn));
   dump_av_set (av1);
   line_finish ();
-
-  /* INSN might have been a bb_header, so free its AV_SET in any case.  */
-  av_set_clear (&AV_SET (insn));
-
-  /* If insn is a bb_header, leave a copy of av1 here.  */
-  if (sel_bb_header_p (insn))
-    {
-      print ("Save av(%d) in bb header", INSN_UID (insn));
-
-      AV_SET (insn) = unique_p ? av_set_copy (av1) : av1;
-      AV_LEVEL (insn) = global_level;
-    }
   
   block_finish ();
   return av1;
@@ -2132,22 +2178,33 @@ compute_live_after_bb (basic_block bb)
 static regset
 compute_live (insn_t insn)
 {
-  if (LV_SET_VALID_P (insn) && !ignore_first)
+  if (sel_bb_head_p (insn) && !ignore_first)
     {
-      regset lv = get_regset_from_pool ();
+      basic_block bb = BLOCK_FOR_INSN (insn);
 
-      COPY_REG_SET (lv, LV_SET (insn));
-      return_regset_to_pool (lv);
-      return lv;
+      if (BB_LV_SET_VALID_P (bb))
+	{
+	  regset lv = get_regset_from_pool ();
+
+	  COPY_REG_SET (lv, BB_LV_SET (bb));
+	  return_regset_to_pool (lv);
+	  return lv;
+	}
     }
 
   /* We've skipped the wrong lv_set.  Don't skip the right one.  */
   ignore_first = false;
   
   {
-    basic_block bb = BLOCK_FOR_INSN (insn);
-    insn_t bb_end = BB_END (bb);
-    regset lv = compute_live_after_bb (bb);
+    basic_block bb;
+    insn_t bb_end;
+    regset lv;
+
+    bb = BLOCK_FOR_INSN (insn);
+    gcc_assert (in_current_region_p (bb));
+
+    bb_end = BB_END (bb);
+    lv = compute_live_after_bb (bb);
 
     while (bb_end != insn)
       {
@@ -2160,14 +2217,13 @@ compute_live (insn_t insn)
     /* Compute live set above INSN.  */
     propagate_lv_set (lv, insn);
 
-    if (sel_bb_header_p (insn))
+    if (sel_bb_head_p (insn))
       {
-	gcc_assert (LV_SET (insn) != NULL);
+	basic_block bb = BLOCK_FOR_INSN (insn);
 
-	COPY_REG_SET (LV_SET (insn), lv);
+	COPY_REG_SET (BB_LV_SET (bb), lv);
+	BB_LV_SET_VALID_P (bb) = true;
       }
-    else
-      gcc_assert (LV_SET (insn) == NULL);
 
     /* We return LV to the pool, but will not clear it there.  Thus we can
        legimatelly use LV till the next use of regset_pool_get ().  */
@@ -2193,37 +2249,20 @@ compute_live_below_insn (insn_t insn, re
 static void
 update_data_sets (rtx insn)
 {
-  gcc_assert (LV_SET (insn) != NULL
-	      && INSN_AV_VALID_P (insn)
-	      && sel_bb_header_p (insn));
-
-  /* Recompute the first LV_SET as it may have got invalid.  */
-  ignore_first = true;
-  compute_live (insn);
+  gcc_assert (sel_bb_head_p (insn) && AV_LEVEL (insn) != 0);
 
   line_start ();
   print ("update_data_sets");
   dump_insn (insn);
   line_finish ();
 
-  block_start ();
-
-  if (LV_SET_VALID_P (insn))
-    {
-      line_start ();
-      print ("live regs set:");
-      dump_lv_set (LV_SET (insn));
-      line_finish ();
-    }
-
-  block_finish ();
+  /* Recompute the INSN's LV_SET as it may have got invalid.  */
+  ignore_first = true;
+  compute_live (insn);
 
-  /* Invalidate AV_SET.  */
-  AV_LEVEL (insn) = 0;
-  if (sel_bb_header_p (insn))
-    compute_av_set (insn, NULL, 0, 0);
-  /* If INSN is not a bb_header any longer, its av_set will be
-     deleted on the next compute_av_set ().  */
+  /* Recompute the INSN's AV_SET as it may have got invalid.  */
+  BB_AV_LEVEL (BLOCK_FOR_INSN (insn)) = -1;
+  compute_av_set (insn, NULL, 0, 0);
 }
 
 
@@ -2295,8 +2334,8 @@ get_spec_check_type_for_insn (insn_t ins
 
 static int
 find_used_regs_1 (insn_t insn, av_set_t orig_ops, ilist_t path, 
-		regset used_regs, HARD_REG_SET *unavailable_hard_regs,
-		bool crosses_call, def_list_t *original_insns)
+		  regset used_regs, HARD_REG_SET *unavailable_hard_regs,
+		  bool crosses_call, def_list_t *original_insns)
 {
   rhs_t rhs;
   bool is_orig_op = false;
@@ -2325,7 +2364,7 @@ find_used_regs_1 (insn_t insn, av_set_t 
   orig_ops = av_set_copy (orig_ops);
 
   /* If we've found valid av set, then filter the orig_ops set.  */
-  if (INSN_AV_VALID_P (insn))
+  if (AV_SET_VALID_P (insn))
     {
       line_start ();
       print ("av");
@@ -2365,7 +2404,7 @@ find_used_regs_1 (insn_t insn, av_set_t 
      When traversing the DAG below this insn is finished, insert bookkeeping 
      code, if the insn is a joint point, and remove leftovers.  */
 
-  rhs = av_set_lookup (orig_ops, INSN_VI (insn));
+  rhs = av_set_lookup (orig_ops, INSN_VINSN (insn));
   if (rhs)
     {
       /* We have found the original operation. Mark the registers that do not
@@ -2408,7 +2447,7 @@ find_used_regs_1 (insn_t insn, av_set_t 
 	      REG_DEAD: dx    
 	 */
       /* FIXME: see comment above and enable MEM_P in vinsn_separable_p.  */
-      gcc_assert (!VINSN_SEPARABLE_P (INSN_VI (insn))
+      gcc_assert (!VINSN_SEPARABLE_P (INSN_VINSN (insn))
                   || !MEM_P (INSN_LHS (insn)));
     }
   else
@@ -2548,7 +2587,7 @@ find_used_regs_1 (insn_t insn, av_set_t 
 
   av_set_clear (&orig_ops);
 
-  gcc_assert (!sel_bb_header_p (insn) || INSN_AV_VALID_P (insn)
+  gcc_assert (!sel_bb_head_p (insn) || AV_SET_VALID_P (insn)
 	      || AV_LEVEL (insn) == -1);
 
   if (res == -1 && AV_LEVEL (insn) == -1)
@@ -2597,7 +2636,7 @@ find_used_regs (insn_t insn, av_set_t or
      unavailable_hard_regs.  */
   FOR_EACH_DEF (def, i, *original_insns)
     {
-      vinsn_t vinsn = INSN_VI (def->orig_insn);
+      vinsn_t vinsn = INSN_VINSN (def->orig_insn);
 
       if (VINSN_SEPARABLE_P (vinsn))
 	mark_unavailable_hard_regs (def, unavailable_hard_regs, used_regs);
@@ -2859,23 +2898,6 @@ fill_ready_list (av_set_t *av_ptr, bnd_t
 
       dump_rhs (rhs);
 
-      /* Don't allow insns from a SCHED_GROUP to be scheduled if their 
-	 ancestors havn't been scheduled.
-	 !!! This should be dealt with in moveup_rhs ().  */
-      if (VINSN_UNIQUE_P (vi) && SCHED_GROUP_P (insn)
-	  && !sel_bb_header_p (insn))
-        {
-          insn_t prev = PREV_INSN (insn);
-          
-          if (SCHED_GROUP_P (prev) 
-              && INSN_SCHED_CYCLE (prev) <= INSN_SCHED_CYCLE (insn))
-	    {
-	      /* Dealt in moveup_rhs ().  */
-	      gcc_unreachable ();
-	      continue;
-	    }
-        }
-      
       /* Don't allow any insns other than from SCHED_GROUP if we have one.  */
       if (FENCE_SCHED_NEXT (fence) && insn != FENCE_SCHED_NEXT (fence))
           continue;
@@ -3204,8 +3226,8 @@ find_best_rhs_and_reg_that_fits (av_set_
          memcpy (FENCE_STATE (fence), curr_state, dfa_state_size);
        }
       else if (GET_CODE (PATTERN (best)) != USE
-              && GET_CODE (PATTERN (best)) != CLOBBER)
-       can_issue_more--;
+	       && GET_CODE (PATTERN (best)) != CLOBBER)
+	can_issue_more--;
     }
 
   *best_rhs_vliw = res;
@@ -3218,19 +3240,13 @@ find_best_rhs_and_reg_that_fits (av_set_
 static insn_t
 gen_insn_from_expr_after (expr_t expr, int seqno, insn_t place_to_insert)
 {
-  {
-    insn_t insn = RHS_INSN (expr);
-
-    /* This assert fails when we have identical instructions
-       one of which dominates the other.  In this case move_op ()
-       finds the first instruction and doesn't search for second one.
-       The solution would be to compute av_set after the first found
-       insn and, if insn present in that set, continue searching.
-       For now we workaround this issue in move_op.  */
-    gcc_assert (!INSN_IN_STREAM_P (insn));
-
-    gcc_assert (!LV_SET_VALID_P (insn));
-  }
+  /* This assert fails when we have identical instructions
+     one of which dominates the other.  In this case move_op ()
+     finds the first instruction and doesn't search for second one.
+     The solution would be to compute av_set after the first found
+     insn and, if insn present in that set, continue searching.
+     For now we workaround this issue in move_op.  */
+  gcc_assert (!INSN_IN_STREAM_P (EXPR_INSN_RTX (expr)));
 
   {
     rtx reg = expr_dest_reg (expr);
@@ -3269,11 +3285,11 @@ gen_insn_from_expr_after (expr_t expr, i
    the upper bb, redirecting all other paths to the lower bb and returns the
    newly created bb, which is the lower bb. 
    All scheduler data is initialized for the newly created insn.  */
-static basic_block
+static void
 generate_bookkeeping_insn (rhs_t c_rhs, insn_t join_point, edge e1, edge e2)
 {
   basic_block src, bb = e2->dest;
-  basic_block new_bb, res = NULL;
+  basic_block new_bb;
   insn_t src_end = NULL_RTX;
   insn_t place_to_insert;
   /* Save the original destination of E1.  */
@@ -3282,7 +3298,7 @@ generate_bookkeeping_insn (rhs_t c_rhs, 
   print ("generate_bookkeeping_insn(%d->%d)", e1->src->index, 
 	 e2->dest->index);
 
-  /* sel_split_block () can emit an unnecessary note if the following isn't
+  /* sched_split_block () can emit an unnecessary note if the following isn't
      true.  */
   gcc_assert (bb_note (bb) != BB_END (bb));
 
@@ -3329,7 +3345,7 @@ generate_bookkeeping_insn (rhs_t c_rhs, 
         }
 
       /* Split the head of the BB to insert BOOK_INSN there.  */
-      new_bb = sel_split_block (bb, NULL);
+      new_bb = sched_split_block (bb, NULL);
   
       /* Move note_list from the upper bb.  */
       gcc_assert (BB_NOTE_LIST (new_bb) == NULL_RTX);
@@ -3343,7 +3359,7 @@ generate_bookkeeping_insn (rhs_t c_rhs, 
   
       /* Make a jump skipping bookkeeping copy.  */
       if (e1->flags & EDGE_FALLTHRU)
-        res = sel_redirect_edge_force (e1, new_bb);
+        sel_redirect_edge_and_branch_force (e1, new_bb);
       else
         sel_redirect_edge_and_branch (e1, new_bb);
 
@@ -3387,25 +3403,9 @@ generate_bookkeeping_insn (rhs_t c_rhs, 
     clear_expr (new_expr);
 
     gcc_assert ((src == NULL && BB_END (bb) == new_insn
-		 && sel_bb_header_p (new_insn))
+		 && sel_bb_head_p (new_insn))
 		|| BB_END (src) == new_insn);
-
-    gcc_assert (AV_SET (new_insn) == NULL && AV_LEVEL (new_insn) == 0);
-
-    /* Set AV_LEVEL to special value to bypass assert in move_op ().  */
-    AV_LEVEL (new_insn) = -1;
-
-    gcc_assert (LV_SET (join_point) != NULL);
-
-    if (sel_bb_header_p (new_insn))
-      {
-	LV_SET (new_insn) = get_regset_from_pool ();
-	ignore_first = true;
-	compute_live (new_insn);
-      }
   }
-
-  return res;
 }
 
 static int fill_insns_run = 0;
@@ -3584,7 +3584,6 @@ fill_insns (fence_t fence, int seqno, il
 	  /* Move chosen insn.  */
 	  {
 	    insn_t place_to_insert;
-	    insn_t new_bb_head = NULL_RTX;
 	    expr_def _c_rhs, *c_rhs = &_c_rhs;
 	    bool b;
 
@@ -3603,27 +3602,6 @@ fill_insns (fence_t fence, int seqno, il
 		 basic block, where INSN will be added.  */
 	      place_to_insert = PREV_INSN (BND_TO (bnd));
 
-	    sel_dump_cfg ("before-move_op");
-
-	    /* Marker is useful to bind .dot dumps and the log.  */
-	    print_marker_to_log ();
-
-	    /* Make a move.  This call will remove the original operation,
-	       insert all necessary bookkeeping instructions and update the
-	       data sets.  After that all we have to do is add the operation
-	       at before BND_TO (BND).  */
-	    b = move_op (BND_TO (bnd), rhs_seq, NULL, NULL, NULL, c_rhs);
-
-	    /* We should be able to find the expression we've chosen for 
-	       scheduling.  */
-	    gcc_assert (b);
-
-            /* We want to use a pattern from rhs_vliw, because it could've 
-               been substituted, and the rest of data from rhs_seq.  */
-            if (! rtx_equal_p (EXPR_PATTERN (rhs_vliw), 
-                               EXPR_PATTERN (c_rhs)))
-              change_vinsn_in_expr (c_rhs, EXPR_VINSN (rhs_vliw));
-
 	    /* Find a place for C_RHS to schedule.
 	       We want to have an invariant that only insns that are
 	       sel_bb_header_p () have a valid LV_SET.  But, in the same time,
@@ -3649,7 +3627,6 @@ fill_insns (fence_t fence, int seqno, il
 	      insn_t prev_insn = PREV_INSN (place_to_insert);
 	      basic_block bb = BLOCK_FOR_INSN (place_to_insert);
 	      basic_block prev_bb = bb->prev_bb;
-	      basic_block next_bb;
 
 	      if (!NOTE_INSN_BASIC_BLOCK_P (place_to_insert)
 		  || prev_insn == NULL_RTX
@@ -3660,44 +3637,76 @@ fill_insns (fence_t fence, int seqno, il
 		  || !in_current_region_p (prev_bb)
 		  || control_flow_insn_p (prev_insn))
 		{
-		  prev_bb = bb;
+		  /* Generate a nop that will help us to avoid removing
+		     data sets we need.  */
+		  place_to_insert = NEXT_INSN (place_to_insert);
+		  gcc_assert (BLOCK_FOR_INSN (place_to_insert) == bb);
+		  place_to_insert = get_nop_from_pool (place_to_insert);
 
-		  /* Save new_bb_head to update lv_set on.  */
-		  if (!NOTE_INSN_BASIC_BLOCK_P (place_to_insert)
-		      && !sel_bb_end_p (place_to_insert))
-		    new_bb_head = NEXT_INSN (place_to_insert);
+		  prev_bb = bb;
 
 		  /* Split block to generate a new floating bb header.  */
-		  next_bb = sel_split_block (bb, place_to_insert);
+		  bb = sched_split_block (bb, place_to_insert);
 		}
 	      else
 		{
-		  gcc_assert (single_succ (prev_bb) == bb);
+		  if (NOTE_INSN_BASIC_BLOCK_P (place_to_insert))
+		    {
+		      place_to_insert = NEXT_INSN (place_to_insert);
+		      gcc_assert (BLOCK_FOR_INSN (place_to_insert) == bb);
+		    }
 
-		  place_to_insert = prev_insn;
-		  next_bb = bb;
+		  /* Generate a nop that will help us to avoid removing
+		     data sets we need.  */
+		  place_to_insert = get_nop_from_pool (place_to_insert);
+
+		  /* Move the nop to the previous block.  */
+		  {
+		    insn_t prev_insn = sel_bb_end (prev_bb);
+		    insn_t note = bb_note (bb);
+		    insn_t nop_insn = sel_bb_head (bb);
+		    insn_t next_insn = NEXT_INSN (nop_insn);
+
+		    gcc_assert (prev_insn != NULL_RTX
+				&& nop_insn == place_to_insert
+				&& PREV_INSN (note) == prev_insn);
+
+		    NEXT_INSN (prev_insn) = nop_insn;
+		    PREV_INSN (nop_insn) = prev_insn;
+
+		    PREV_INSN (note) = nop_insn;
+		    NEXT_INSN (note) = next_insn;
+
+		    NEXT_INSN (nop_insn) = note;
+		    PREV_INSN (next_insn) = note;
+
+		    BB_END (prev_bb) = nop_insn;
+		    BLOCK_FOR_INSN (nop_insn) = prev_bb;
+		  }
 		}
 
-	      if (sel_bb_empty_p (next_bb))
-		sel_merge_blocks (prev_bb, next_bb);
-
-	      gcc_assert (BLOCK_FOR_INSN (place_to_insert) == prev_bb);
+	      gcc_assert (single_succ (prev_bb) == bb);
 
-	      /* Now do some cleanup: remove empty basic blocks after
-		 BB.  */
+	      sel_dump_cfg ("before-move_op");
 
-	      next_bb = prev_bb->next_bb;
+	      /* Marker is useful to bind .dot dumps and the log.  */
+	      print_marker_to_log ();
 
-	      /* !!! Can't use bb_empty_p here because it returns true on
-		 empty blocks with labels.  */
-	      while (BB_HEAD (next_bb) == BB_END (next_bb)
-		     && in_current_region_p (next_bb))
-		{
-		  bb = next_bb->next_bb;
-
-		  sel_remove_empty_bb (next_bb, true, true);
-		  next_bb = bb;
-		}
+	      /* Make a move.  This call will remove the original operation,
+		 insert all necessary bookkeeping instructions and update the
+		 data sets.  After that all we have to do is add the operation
+		 at before BND_TO (BND).  */
+	      b = move_op (BND_TO (bnd), rhs_seq, NULL, NULL, NULL, c_rhs);
+
+	      /* We should be able to find the expression we've chosen for 
+		 scheduling.  */
+	      gcc_assert (b);
+
+	      /* We want to use a pattern from rhs_vliw, because it could've 
+		 been substituted, and the rest of data from rhs_seq.  */
+	      if (! rtx_equal_p (EXPR_PATTERN (rhs_vliw), 
+				 EXPR_PATTERN (c_rhs)))
+		change_vinsn_in_expr (c_rhs, EXPR_VINSN (rhs_vliw));
 	    }
 
 	    /* Add the instruction.  */
@@ -3706,25 +3715,10 @@ fill_insns (fence_t fence, int seqno, il
 
 	    ++INSN_SCHED_TIMES (insn);
 
-	    if (NOTE_INSN_BASIC_BLOCK_P (place_to_insert))
-	      {
-		gcc_assert (new_bb_head == NULL_RTX);
-		new_bb_head = insn;
-	      }
-
-            /* Initialize LV_SET of the bb header.  */
-	    if (new_bb_head != NULL_RTX)
-	      {
-		/* !!! TODO: We should replace all occurencies of
-		   LV_SET_VALID_P () with LV_SET () != NULL.  Overwise it is
-		   not clear what a valid and invalid lv set is.  */
-
-		if (LV_SET (new_bb_head) == NULL)
-		  LV_SET (new_bb_head) = get_clear_regset_from_pool ();
-
-		ignore_first = true;
-		compute_live (new_bb_head);
-	      }
+	    if (INSN_NOP_P (place_to_insert))
+	      /* Return the nop generated for preserving of data sets back
+		 into pool.  */
+	      return_nop_to_pool (place_to_insert);
 	  }
 
 	  av_set_clear (&rhs_seq);
@@ -3919,9 +3913,9 @@ move_op (insn_t insn, av_set_t orig_ops,
 	 rhs_t c_rhs)
 {
   rhs_t rhs;
-  basic_block bb;
   bool c_rhs_inited_p;
   rtx reg;
+  bool generated_nop_p = false;
   
   line_start ();
   print ("move_op(");
@@ -3943,7 +3937,7 @@ move_op (insn_t insn, av_set_t orig_ops,
   orig_ops = av_set_copy (orig_ops);
 
   /* If we've found valid av set, then filter the orig_ops set.  */
-  if (INSN_AV_VALID_P (insn))
+  if (AV_SET_VALID_P (insn))
     {
       line_start ();
       print ("av");
@@ -3982,12 +3976,14 @@ move_op (insn_t insn, av_set_t orig_ops,
      When traversing the DAG below this insn is finished, insert bookkeeping 
      code, if the insn is a joint point, and remove leftovers.  */
 
-  rhs = av_set_lookup (orig_ops, INSN_VI (insn));
+  rhs = av_set_lookup (orig_ops, INSN_VINSN (insn));
 
   if (rhs != NULL)
     /* We have found the original operation.  Replace it by REG, if 
        it is scheduled as RHS, or just remove it later, if it's an insn.  */
     {
+      print ("found original operation!");
+
       copy_expr_onside (c_rhs, INSN_EXPR (insn));
       c_rhs_inited_p = true;
 
@@ -4009,53 +4005,15 @@ move_op (insn_t insn, av_set_t orig_ops,
 	     always false.  */
 	  gcc_unreachable ();
 
-	  change_vinsn_in_expr (c_rhs, INSN_VI (insn));
+	  change_vinsn_in_expr (c_rhs, INSN_VINSN (insn));
 	}
       
       /* For instructions we must immediately remove insn from the
 	 stream, so subsequent update_data_sets () won't include this
 	 insn into av_set.
 	 For rhs we must make insn look like "INSN_REG (insn) := c_rhs".  */
-
-      print ("found original operation!");
-
       {
-	insn_t finish_insn = insn;
-
-	{
-	  ds_t check_ds = get_spec_check_type_for_insn (insn, rhs);
-
-	  if (check_ds != 0)
-	    /* A speculation check should be inserted.  */
-	    {
-	      rtx x;
-
-	      x = create_speculation_check_insn_rtx (EXPR_INSN_RTX (rhs),
-						     check_ds);
-	      x = sel_gen_insn_from_rtx_after (x,
-					       INSN_EXPR (finish_insn),
-					       INSN_SEQNO (finish_insn),
-					       finish_insn);
-
-	      EXPR_SPEC_DONE_DS (INSN_EXPR (x)) &= ~check_ds;
-
-	      finish_insn = x;
-
-	      /* If we've generated a data speculation check, make sure
-		 that all the bookkeeping instruction we'll create during
-		 this move_op () will allocate an ALAT entry so that the
-		 check won't fail.
-		 ??? We should ask target if somethings needs to be done
-		 here.  */
-	      /* Commented out due to PR23.  */
-	      /*speculate_expr (c_rhs, ds_get_max_dep_weak (check_ds));*/
-	    }
-	  else
-	    EXPR_SPEC_DONE_DS (INSN_EXPR (finish_insn)) = 0;
-
-	  gcc_assert (EXPR_SPEC_DONE_DS (INSN_EXPR (finish_insn)) == 0
-		      && EXPR_SPEC_TO_CHECK_DS (INSN_EXPR (finish_insn)) == 0);
-	}
+	bool recovery_p = false;
 
 	{
 	  rtx cur_reg = expr_dest_reg (c_rhs);
@@ -4067,37 +4025,63 @@ move_op (insn_t insn, av_set_t orig_ops,
 	     operation's right hand side with the register chosen.  */
 	  if (reg != NULL_RTX && REGNO (reg) != REGNO (cur_reg))
 	    {
-	      rtx insn_rtx;
-	      insn_t x;
+	      rtx reg_move_insn_rtx;
+	      insn_t reg_move_insn;
+
+	      reg_move_insn_rtx = create_insn_rtx_with_rhs (INSN_VINSN (insn),
+							    reg);
+	      reg_move_insn = sel_gen_insn_from_rtx_after (reg_move_insn_rtx,
+							   INSN_EXPR (insn),
+							   INSN_SEQNO (insn),
+							   insn);
+	      EXPR_SPEC_DONE_DS (INSN_EXPR (reg_move_insn)) = 0;
 
 	      replace_dest_with_reg_in_rhs (c_rhs, reg);
 
-	      insn_rtx = create_insn_rtx_with_rhs (INSN_VI (insn), reg);
-	      x = sel_gen_insn_from_rtx_after (insn_rtx,
-					       INSN_EXPR (finish_insn),
-					       INSN_SEQNO (finish_insn),
-					       finish_insn);
+	      recovery_p = true;
+	    }
+	}
+
+	{
+	  insn_t x;
+	  ds_t check_ds = get_spec_check_type_for_insn (insn, rhs);
+
+	  if (check_ds != 0)
+	    {
+	      /* A speculation check should be inserted.  */
+	      x = create_speculation_check (c_rhs, check_ds, insn);
 
-	      finish_insn = x;
+	      recovery_p = true;
 	    }
+	  else
+	    {
+	      EXPR_SPEC_DONE_DS (INSN_EXPR (insn)) = 0;
+	      x = insn;
+	    }
+
+	  gcc_assert (EXPR_SPEC_DONE_DS (INSN_EXPR (x)) == 0
+		      && EXPR_SPEC_TO_CHECK_DS (INSN_EXPR (x)) == 0);
 	}
 
+	{
+	  insn_t x;
+
+	  if (!recovery_p)
+	    {
+	      x = get_nop_from_pool (insn);
+
+	      generated_nop_p = true;
+	    }
+	  else
+	    x = NEXT_INSN (insn);
 
-	if (insn == finish_insn)
 	  /* For the insns that don't have rhs just remove insn from the
 	     stream.  Also remove insn if substituting it's right hand 
 	     side would result in operation like reg:=reg.  This kind of
 	     operation is not only excessive, but it may not be supported 
 	     on certain platforms, e.g. "mov si, si" is invalid on i386.  */
-	  finish_insn = get_nop_from_pool (insn);
-
-	{
-	  insn_t new_start_insn = NEXT_INSN (insn);
-
-	  transfer_data_sets (new_start_insn, insn);
-	  sched_sel_remove_insn (insn);
-
-	  insn = new_start_insn;
+	  sel_remove_insn (insn);
+	  insn = x;
 	}
       }
     }
@@ -4199,85 +4183,22 @@ move_op (insn_t insn, av_set_t orig_ops,
 
   /* We should generate bookkeeping code only if we are not at the
      top level of the move_op.  */
-  if (e1 && num_preds_gt_1 (insn))
+  if (e1 && sel_num_cfg_preds_gt_1 (insn))
     {
       /* INSN is a joint point, insert bookkeeping code here.  */
-      bb = generate_bookkeeping_insn (c_rhs, insn, e1, e2);
-      gcc_assert (sel_bb_header_p (insn));
-    }
-  else
-    bb = NULL;
-
-  if (sel_bb_header_p (insn))
-    {
-      if (AV_LEVEL (insn) == -1)
-	/* This will make assert in update_data_sets () happy.  */
-	AV_LEVEL (insn) = global_level;
-      else
-	gcc_assert (INSN_AV_VALID_P (insn));
+      generate_bookkeeping_insn (c_rhs, insn, e1, e2);
+      gcc_assert (sel_bb_head_p (insn));
     }
 
-  if (sel_bb_header_p (insn))
-    {
-      update_data_sets (insn);
-
-      if (bb)
-	{
-	  /* Make assertion in update_data_sets () happy.  */
-	  AV_LEVEL (NEXT_INSN (bb_note (bb))) = global_level;
-
-	  /* We created an extra block in generate_bookkeeping_insn ().
-	     Initialize av_set for it.  */
-	  update_data_sets (NEXT_INSN (bb_note (bb)));
-	}
-    }
+  if (sel_bb_head_p (insn))
+    update_data_sets (insn);
   else
-    {
-      gcc_assert (!LV_SET_VALID_P (insn)
-		  && !INSN_AV_VALID_P (insn));
-    }
+    gcc_assert (AV_LEVEL (insn) == INSN_WS_LEVEL (insn));
 
   /* If INSN was previously marked for deletion, it's time to do it.  */
-  if (INSN_NOP_P (insn))
+  if (generated_nop_p)
     {
-      bool transfered_p = false;
-
-      if (insn == BB_END (BLOCK_FOR_INSN (insn)))
-	{
-	  succ_iterator succ_i;
-	  insn_t succ;
-
-	  FOR_EACH_SUCC (succ, succ_i, insn)
-	    /* NB: We can't assert that SUCC has valid AV_SET because SUCC
-	       can be an ineligible successor of INSN.  */
-	    gcc_assert (LV_SET_VALID_P (succ));
-	}
-
-      if (sel_bb_header_p (insn))
-	{
-	  gcc_assert (LV_SET_VALID_P (insn));
-
-	  if (insn == BB_END (BLOCK_FOR_INSN (insn)))
-	    /* We are about to remove the only insn in the block -
-	       delete its LV_SET.  */
-	    {
-	      return_regset_to_pool (LV_SET (insn));
-	      LV_SET (insn) = NULL;
-	    }
-	  else
-	    {
-	      transfer_data_sets (cfg_succ (insn), insn);
-	      transfered_p = true;
-	    }
-	}
-
-      if (!transfered_p)
-	{
-	  av_set_clear (&AV_SET (insn));
-	  AV_LEVEL (insn) = 0;
-
-	  gcc_assert (!LV_SET_VALID_P (insn));
-	}
+      gcc_assert (INSN_NOP_P (insn));
 
       return_nop_to_pool (insn);
     }
@@ -4436,9 +4357,6 @@ split_edges_incoming_to_rgn (void)
   VEC_free (edge, heap, edges_to_split);
 }
 
-/* Save old RTL hooks here.  */
-static struct rtl_hooks old_rtl_hooks;
-
 /* Init scheduling data for RGN.  Returns true when this region should not 
    be scheduled.  */
 static bool
@@ -4527,15 +4445,8 @@ sel_region_init (int rgn)
   VEC_free (basic_block, heap, bbs);
 
   /* Set hooks so that no newly generated insn will go out unnoticed.  */
-  old_rtl_hooks = rtl_hooks;
-  rtl_hooks = sel_rtl_hooks;
-
-  /* Save create_basic_block () to use in sel_create_basic_block ().  */
-  old_create_basic_block = rtl_cfg_hooks.create_basic_block;
-  rtl_cfg_hooks.create_basic_block = sel_create_basic_block;
-
-  old_delete_basic_block = rtl_cfg_hooks.delete_basic_block;
-  rtl_cfg_hooks.delete_basic_block = rtl_delete_block_not_barriers;
+  sel_register_rtl_hooks ();
+  sel_register_cfg_hooks ();
 
   if (pipelining_p)
     {
@@ -4569,7 +4480,7 @@ sel_region_init (int rgn)
   bitmap_initialize (forced_ebb_heads, 0);
   bitmap_clear (forced_ebb_heads);
 
-  setup_empty_vinsn ();
+  setup_nop_vinsn ();
 
   return false;
 }
@@ -4593,6 +4504,30 @@ sel_region_finish (void)
       VEC_free (rhs_t, heap, vec_av_set);
     }
 
+  /* If LV_SET of the region head should be updated, do it now because
+     there will be no other chance.  */
+  {
+    insn_t *succs;
+    int succs_num;
+    int i;
+
+    cfg_succs_1 (bb_note (EBB_FIRST_BB (0)),
+		 SUCCS_NORMAL | SUCCS_SKIP_TO_LOOP_EXITS,
+		 &succs, &succs_num);
+
+    gcc_assert (flag_sel_sched_pipelining_outer_loops
+		|| succs_num == 1);
+
+    for (i = 0; i < succs_num; i++)
+      {
+	insn_t insn = succs[i];
+	basic_block bb = BLOCK_FOR_INSN (insn);
+
+	if (!BB_LV_SET_VALID_P (bb))
+	  compute_live (insn);
+      }
+  }
+
   /* Emulate the Haifa scheduler for bundling.  */
   if (reload_completed && flag_schedule_emulate_haifa)
     {
@@ -4804,8 +4739,7 @@ sel_region_finish (void)
 	      /* Extend luids so that insns generated by the target will
 		 get zero luid.  */
 	      sched_init_luids (NULL, NULL, NULL, NULL);
-
-	      insn_init.todo = INSN_INIT_TODO_MOVE_LV_SET_IF_BB_HEADER;
+	      insn_init.todo = 0;
 	      sel_init_new_insns ();
 	    }
         }
@@ -4818,7 +4752,7 @@ sel_region_finish (void)
 
   bitmap_clear (forced_ebb_heads);
 
-  free_empty_vinsn ();
+  free_nop_vinsn ();
 
   finish_deps_global ();
   sched_deps_local_finish ();
@@ -4826,10 +4760,8 @@ sel_region_finish (void)
 
   sel_finish_bbs ();
 
-  rtl_cfg_hooks.delete_basic_block = old_delete_basic_block;
-  rtl_cfg_hooks.create_basic_block = old_create_basic_block;
-
-  rtl_hooks = old_rtl_hooks;
+  sel_unregister_cfg_hooks ();
+  sel_unregister_rtl_hooks ();
 
   /* Reset MAX_ISSUE_SIZE.  */
   max_issue_size = 0;
@@ -5033,9 +4965,9 @@ sel_sched_region_1 (void)
       /* When pipelining outer loops, create fences on the loop header,
 	 not preheader.  */
       if (current_loop_nest)
-	init_fences (current_loop_nest->header);
+	init_fences (BB_END (EBB_FIRST_BB (0)));
       else
-	init_fences (EBB_FIRST_BB (0));
+	init_fences (bb_note (EBB_FIRST_BB (0)));
     }
 
   global_level = 1;
@@ -5067,7 +4999,7 @@ sel_sched_region_1 (void)
               for (i = 0; i < current_nr_blocks; i++)
                 {
                   bb = EBB_FIRST_BB (i);
-                  head = sel_bb_header (bb);
+                  head = sel_bb_head (bb);
 
                   /* While pipelining outer loops, skip bundling for loop 
                      preheaders.  Those will be rescheduled in the outer
@@ -5092,7 +5024,7 @@ sel_sched_region_1 (void)
   
                       gcc_assert (fences == NULL);
   
-                      init_fences (bb);
+                      init_fences (bb_note (bb));
   
                       sel_sched_region_2 (data);
   
@@ -5109,7 +5041,7 @@ sel_sched_region_1 (void)
           /* Schedule region pre-header first, if not pipelining 
              outer loops.  */
           bb = EBB_FIRST_BB (0);
-          head = sel_bb_header (bb);
+          head = sel_bb_head (bb);
           
           if (sel_is_loop_preheader_p (bb))          
             /* Don't leave old flags on insns in bb.  */
@@ -5128,7 +5060,7 @@ sel_sched_region_1 (void)
 
               gcc_assert (fences == NULL);
 
-              init_fences (bb);
+              init_fences (bb_note (bb));
 
               sel_sched_region_2 (data);
             }
@@ -5136,6 +5068,10 @@ sel_sched_region_1 (void)
           /* Reschedule pipelined code without pipelining.  */
           loop_entry = EBB_FIRST_BB (1);
 
+	  /* Please note that loop_header (not preheader) might not be in
+	     the current region.  Hence it is possible for loop_entry to have
+	     an arbitrary number of predecessors.  */
+#if 0
 	  /* ??? Why don't we assert that EBB_FIRST_BB (1) is an
 	     actual loop entry?  There must be something wrong if we
 	     somehow created an extra block before the loop.  */
@@ -5143,6 +5079,7 @@ sel_sched_region_1 (void)
             loop_entry = loop_entry->next_bb;
 
           gcc_assert (loop_entry && EDGE_COUNT (loop_entry->preds) == 2);
+#endif
 
           for (i = BLOCK_TO_BB (loop_entry->index); i < current_nr_blocks; i++)
             {
@@ -5183,7 +5120,7 @@ sel_sched_region_1 (void)
 
           gcc_assert (fences == NULL);
 
-          init_fences (loop_entry);
+          init_fences (BB_END (EBB_FIRST_BB (0)));
 
           sel_sched_region_2 (data);
         }
@@ -5293,10 +5230,11 @@ sel_global_init (void)
       }
   }
 
-  setup_nop_and_exit_insns ();
-
   sel_extend_insn_rtx_data ();
 
+  setup_nop_and_exit_insns ();
+
+  sel_extend_global_bb_info ();
   init_lv_sets ();
 }
 
@@ -5304,13 +5242,16 @@ sel_global_init (void)
 static void
 sel_global_finish (void)
 {
+  free_bb_note_pool ();
+
   free_lv_sets ();
+  sel_finish_global_bb_info ();
 
-  sel_finish_insn_rtx_data ();
+  free_regset_pool ();
 
   free_nop_and_exit_insns ();
 
-  free_regset_pool ();
+  sel_finish_insn_rtx_data ();
 
   CLEAR_REG_SET (sel_all_regs);
 
--- gcc-local/sel-sched-dev/gcc/sel-sched-ir.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sel-sched-ir.c	(revision 28697)
@@ -59,8 +59,49 @@
 /* A structure used to hold various parameters of insn initialization.  */
 struct _insn_init insn_init;
 
+/* A vector holding bb info for the whole scheduling pass.  */
+VEC(sel_global_bb_info_def, heap) *sel_global_bb_info = NULL;
+
 /* A vector holding bb info.  */
-VEC (sel_bb_info_def, heap) *sel_bb_info = NULL;
+VEC(sel_region_bb_info_def, heap) *sel_region_bb_info = NULL;
+
+/* Extend pass-scope data structures for basic blocks.  */
+void
+sel_extend_global_bb_info (void)
+{
+  VEC_safe_grow_cleared (sel_global_bb_info_def, heap, sel_global_bb_info,
+			 last_basic_block);
+}
+
+/* Extend region-scope data structures for basic blocks.  */
+static void
+extend_region_bb_info (void)
+{
+  VEC_safe_grow_cleared (sel_region_bb_info_def, heap, sel_region_bb_info,
+			 last_basic_block);
+}
+
+/* Extend all data structures to fit all basic blocks.  */
+static void
+extend_bb_info (void)
+{
+  sel_extend_global_bb_info ();
+  extend_region_bb_info ();
+}
+
+/* Finalize pass-scope data structures for basic blocks.  */
+void
+sel_finish_global_bb_info (void)
+{
+  VEC_free (sel_global_bb_info_def, heap, sel_global_bb_info);
+}
+
+/* Finalize region-scope data structures for basic blocks.  */
+static void
+finish_region_bb_info (void)
+{
+  VEC_free (sel_region_bb_info_def, heap, sel_region_bb_info);
+}
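
For the record, the lifetime split is the whole point of having two vectors
here: sel_global_bb_info lives for the whole pass, sel_region_bb_info only
for the current region, and both are grown lazily and indexed by basic block
index.  A minimal standalone sketch of that pattern in plain C (illustrative
names only, not the GCC VEC API):

#include <stdlib.h>
#include <string.h>

/* Illustrative per-block record, analogous to sel_global_bb_info_def.  */
struct block_info
{
  void *lv_set;          /* would be a regset in the real code */
  int lv_set_valid_p;
};

static struct block_info *block_info;   /* indexed by basic block index */
static size_t block_info_len;

/* Grow the vector to at least N_BLOCKS entries, zero-initializing the
   new tail (compare VEC_safe_grow_cleared).  */
static void
grow_block_info (size_t n_blocks)
{
  if (n_blocks <= block_info_len)
    return;
  block_info = realloc (block_info, n_blocks * sizeof (*block_info));
  memset (block_info + block_info_len, 0,
          (n_blocks - block_info_len) * sizeof (*block_info));
  block_info_len = n_blocks;
}

/* Release the vector when its scope (pass or region) ends.  */
static void
free_block_info (void)
{
  free (block_info);
  block_info = NULL;
  block_info_len = 0;
}

int
main (void)
{
  grow_block_info (8);            /* initial CFG size */
  block_info[3].lv_set_valid_p = 1;
  grow_block_info (16);           /* new blocks appeared; old data kept */
  free_block_info ();
  return 0;
}
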
 
 /* The loop nest being pipelined.  */
 struct loop *current_loop_nest;
@@ -75,20 +116,6 @@ static sbitmap bbs_in_loop_rgns = NULL;
 /* A vector holding data for each insn rtx.  */
 VEC (sel_insn_rtx_data_def, heap) *s_i_r_d = NULL;
 
-/* This variable is used to ensure that no insns will be emitted by
-   outer-world functions like redirect_edge_and_branch ().  */
-static bool can_add_insns_p = true;
-
-/* The same as the previous flag except that notes are allowed 
-   to be emitted.  
-   FIXME: avoid this dependency between files.  */
-bool can_add_real_insns_p = true;
-
-/* Redefine RTL hooks so we can catch the moment of creating an insn.  */
-static void sel_rtl_insn_added (insn_t);
-#undef RTL_HOOKS_INSN_ADDED
-#define RTL_HOOKS_INSN_ADDED sel_rtl_insn_added
-const struct rtl_hooks sel_rtl_hooks = RTL_HOOKS_INITIALIZER;
 
 
 /* Array containing reverse topological index of function basic blocks,
@@ -575,15 +602,15 @@ fence_clear (fence_t f)
     delete_target_context (tc);
 }
 
-/* Init a list of fences with the head of BB.  */
+/* Init a list of fences with successors of OLD_FENCE.  */
 void
-init_fences (basic_block bb)
+init_fences (insn_t old_fence)
 {
   int succs_num;
   insn_t *succs;
   int i;
 
-  cfg_succs_1 (bb_note (bb), SUCCS_NORMAL | SUCCS_SKIP_TO_LOOP_EXITS, 
+  cfg_succs_1 (old_fence, SUCCS_NORMAL | SUCCS_SKIP_TO_LOOP_EXITS, 
 	       &succs, &succs_num);
 
   gcc_assert (flag_sel_sched_pipelining_outer_loops
@@ -597,10 +624,9 @@ init_fences (basic_block bb)
 		 create_target_context (true) /* tc */,
 		 NULL_RTX /* last_scheduled_insn */, NULL_RTX /* sched_next */,
 		 1 /* cycle */, 0 /* cycle_issued_insns */, 
-		 1 /* starts_cycle_p */, 0 /* after_stall_p */);
-  
+		 1 /* starts_cycle_p */, 0 /* after_stall_p */);  
     }
-  }
+}
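
A fence is now seeded from an insn rather than from a basic block:
init_fences () walks the successors of OLD_FENCE and starts one fence per
successor.  A minimal standalone model of that behaviour (made-up types in
place of insn_t and flist_t, so only the shape of the loop carries over):

#include <stdio.h>
#include <stdlib.h>

/* Toy "insn" with an explicit successor array, standing in for the
   cfg_succs_1 () walk.  */
struct insn
{
  int uid;
  struct insn **succs;
  int n_succs;
};

/* A fence marks a point where scheduling will resume.  */
struct fence
{
  struct insn *insn;
  struct fence *next;
};

/* Make one fresh fence per successor of OLD_FENCE.  */
static struct fence *
seed_fences (struct insn *old_fence)
{
  struct fence *fences = NULL;
  int i;

  for (i = 0; i < old_fence->n_succs; i++)
    {
      struct fence *f = malloc (sizeof (*f));

      f->insn = old_fence->succs[i];
      f->next = fences;
      fences = f;
    }

  return fences;
}

int
main (void)
{
  struct insn a = { 2, NULL, 0 }, b = { 3, NULL, 0 };
  struct insn *succs[] = { &a, &b };
  struct insn start = { 1, succs, 2 };
  struct fence *f = seed_fences (&start);

  while (f != NULL)
    {
      struct fence *next = f->next;

      printf ("fence at insn %d\n", f->insn->uid);
      free (f);
      f = next;
    }
  return 0;
}
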
 
 /* Add a new fence to NEW_FENCES list, initializing it from all 
    other parameters.  */
@@ -625,7 +651,7 @@ new_fences_add (flist_tail_t new_fences,
     /* Here we should somehow choose between two DFA states.
        Plain reset for now.  */
     {
-      gcc_assert (sel_bb_header_p (FENCE_INSN (f))
+      gcc_assert (sel_bb_head_p (FENCE_INSN (f))
 		  && !sched_next && !FENCE_SCHED_NEXT (f));
 
       state_reset (FENCE_STATE (f));
@@ -798,6 +824,10 @@ static void set_insn_init (expr_t, vinsn
 static void vinsn_attach (vinsn_t);
 static void vinsn_detach (vinsn_t);
 
+/* A vinsn that is used to represent a nop.  This vinsn is shared among all
+   nops sel-sched generates.  */
+static vinsn_t nop_vinsn = NULL;
+
 /* Emit a nop before INSN, taking it from pool.  */
 insn_t
 get_nop_from_pool (insn_t insn)
@@ -811,32 +841,16 @@ get_nop_from_pool (insn_t insn)
     nop = nop_pattern;
 
   insn_init.what = INSN_INIT_WHAT_INSN;
-  nop = emit_insn_after (nop, insn);
+  nop = emit_insn_before (nop, insn);
 
   if (old_p)
-    {
-      vinsn_t vi = GET_VINSN_BY_INSN (nop);
-
-      gcc_assert (vi != NULL);
-
-      GET_VINSN_BY_INSN (nop) = NULL;
-
-      insn_init.todo = INSN_INIT_TODO_SSID;
-      set_insn_init (INSN_EXPR (insn), vi, INSN_SEQNO (insn));
-    }
+    insn_init.todo = INSN_INIT_TODO_SSID;
   else
-    {
-      insn_init.todo = INSN_INIT_TODO_LUID | INSN_INIT_TODO_SSID;
-      set_insn_init (INSN_EXPR (insn), NULL, INSN_SEQNO (insn));
-    }
+    insn_init.todo = INSN_INIT_TODO_LUID | INSN_INIT_TODO_SSID;
 
+  set_insn_init (INSN_EXPR (insn), nop_vinsn, INSN_SEQNO (insn));
   sel_init_new_insns ();
 
-  if (!old_p)
-    /* One more attach to GET_VINSN_BY_INSN to servive
-       sched_sel_remove_insn () in return_nop_to_pool ().  */
-    vinsn_attach (INSN_VINSN (nop));
-
   return nop;
 }
 
@@ -844,12 +858,8 @@ get_nop_from_pool (insn_t insn)
 void
 return_nop_to_pool (insn_t nop)
 {
-  gcc_assert (INSN_VINSN (nop) != NULL);
-
-  GET_VINSN_BY_INSN (nop) = INSN_VINSN (nop);
-
   gcc_assert (INSN_IN_STREAM_P (nop));
-  sched_sel_remove_insn (nop);
+  sel_remove_insn (nop);
 
   if (nop_pool.n == nop_pool.s)
     nop_pool.v = xrealloc (nop_pool.v, ((nop_pool.s = 2 * nop_pool.s + 1)
@@ -862,22 +872,135 @@ return_nop_to_pool (insn_t nop)
 void
 free_nop_pool (void)
 {
-  while (nop_pool.n)
+  nop_pool.n = 0;
+  nop_pool.s = 0;
+  free (nop_pool.v);
+  nop_pool.v = NULL;
+}
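
With the shared nop_vinsn the nop machinery reduces to a plain object pool:
get_nop_from_pool () reuses a pooled nop or makes a fresh one from
nop_pattern, return_nop_to_pool () pushes it back into a growable vector,
and free_nop_pool () now just releases that vector.  A self-contained
sketch of the pool pattern (illustrative names, not the scheduler's insn_t
machinery):

#include <stdlib.h>

/* Toy pool mirroring the shape of nop_pool: a vector V, its allocated
   size S and the number of pooled entries N.  */
struct pool
{
  void **v;
  int n;
  int s;
};

static struct pool pool;

/* Take an object from the pool, or signal that a fresh one is needed.  */
static void *
get_from_pool (void)
{
  if (pool.n > 0)
    return pool.v[--pool.n];
  return NULL;                  /* caller creates a brand-new object */
}

/* Return an object to the pool, growing the vector with the same
   "2 * s + 1" rule that return_nop_to_pool () uses.  */
static void
return_to_pool (void *obj)
{
  if (pool.n == pool.s)
    {
      pool.s = 2 * pool.s + 1;
      pool.v = realloc (pool.v, pool.s * sizeof (*pool.v));
    }
  pool.v[pool.n++] = obj;
}

/* Drop the pool itself; the pooled objects are owned elsewhere.  */
static void
free_pool (void)
{
  pool.n = 0;
  pool.s = 0;
  free (pool.v);
  pool.v = NULL;
}

int
main (void)
{
  int x = 0;

  return_to_pool (&x);
  (void) get_from_pool ();
  free_pool ();
  return 0;
}
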
+
+
+/* Return 1 if X and Y are identical-looking rtx's.
+   This is the Lisp function EQUAL for rtx arguments.
+   Copied from rtl.c.  The only difference is support for ia64 speculation.  */
+static int
+sel_rtx_equal_p (rtx x, rtx y)
+{
+  int i;
+  int j;
+  enum rtx_code code;
+  const char *fmt;
+
+  if (x == y)
+    return 1;
+  if (x == 0 || y == 0)
+    return 0;
+
+  /* Support ia64 speculation.  */
+  {
+    if (GET_CODE (x) == UNSPEC
+	&& (targetm.sched.skip_rtx_p == NULL
+	    || targetm.sched.skip_rtx_p (x)))
+      return sel_rtx_equal_p (XVECEXP (x, 0, 0), y);
+
+    if (GET_CODE (y) == UNSPEC
+	&& (targetm.sched.skip_rtx_p == NULL
+	    || targetm.sched.skip_rtx_p (y)))
+      return sel_rtx_equal_p (x, XVECEXP (y, 0, 0));
+  }
+
+  code = GET_CODE (x);
+  /* Rtx's of different codes cannot be equal.  */
+  if (code != GET_CODE (y))
+    return 0;
+
+  /* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.
+     (REG:SI x) and (REG:HI x) are NOT equivalent.  */
+
+  if (GET_MODE (x) != GET_MODE (y))
+    return 0;
+
+  /* Some RTL can be compared nonrecursively.  */
+  switch (code)
     {
-      insn_t nop = nop_pool.v[--nop_pool.n];
-      vinsn_t vi = GET_VINSN_BY_INSN (nop);
+    case REG:
+      return (REGNO (x) == REGNO (y));
+
+    case LABEL_REF:
+      return XEXP (x, 0) == XEXP (y, 0);
 
-      gcc_assert (vi != NULL && VINSN_COUNT (vi) == 1);
-      vinsn_detach (vi);
+    case SYMBOL_REF:
+      return XSTR (x, 0) == XSTR (y, 0);
 
-      GET_VINSN_BY_INSN (nop) = NULL;
+    case SCRATCH:
+    case CONST_DOUBLE:
+    case CONST_INT:
+      return 0;
+
+    default:
+      break;
     }
 
-  nop_pool.s = 0;
-  free (nop_pool.v);
-  nop_pool.v = NULL;
+  /* Compare the elements.  If any pair of corresponding elements
+     fail to match, return 0 for the whole thing.  */
+
+  fmt = GET_RTX_FORMAT (code);
+  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
+    {
+      switch (fmt[i])
+	{
+	case 'w':
+	  if (XWINT (x, i) != XWINT (y, i))
+	    return 0;
+	  break;
+
+	case 'n':
+	case 'i':
+	  if (XINT (x, i) != XINT (y, i))
+	    return 0;
+	  break;
+
+	case 'V':
+	case 'E':
+	  /* Two vectors must have the same length.  */
+	  if (XVECLEN (x, i) != XVECLEN (y, i))
+	    return 0;
+
+	  /* And the corresponding elements must match.  */
+	  for (j = 0; j < XVECLEN (x, i); j++)
+	    if (sel_rtx_equal_p (XVECEXP (x, i, j), XVECEXP (y, i, j)) == 0)
+	      return 0;
+	  break;
+
+	case 'e':
+	  if (sel_rtx_equal_p (XEXP (x, i), XEXP (y, i)) == 0)
+	    return 0;
+	  break;
+
+	case 'S':
+	case 's':
+	  if ((XSTR (x, i) || XSTR (y, i))
+	      && (! XSTR (x, i) || ! XSTR (y, i)
+		  || strcmp (XSTR (x, i), XSTR (y, i))))
+	    return 0;
+	  break;
+
+	case 'u':
+	  /* These are just backpointers, so they don't matter.  */
+	  break;
+
+	case '0':
+	case 't':
+	  break;
+
+	  /* It is believed that rtx's at this level will never
+	     contain anything but integers and other rtx's,
+	     except for within LABEL_REFs and SYMBOL_REFs.  */
+	default:
+	  gcc_unreachable ();
+	}
+    }
+  return 1;
 }
-
 
 static bool
 vinsn_equal_p (vinsn_t vi1, vinsn_t vi2)
@@ -887,7 +1010,7 @@ vinsn_equal_p (vinsn_t vi1, vinsn_t vi2)
 
   return (VINSN_UNIQUE_P (vi1)
 	  ? VINSN_INSN (vi1) == VINSN_INSN (vi2)
-	  : expr_equal_p (VINSN_PATTERN (vi1), VINSN_PATTERN (vi2)));
+	  : sel_rtx_equal_p (VINSN_PATTERN (vi1), VINSN_PATTERN (vi2)));
 }
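
The effect of sel_rtx_equal_p () is easiest to see on a toy example: it is
ordinary structural equality, except that a speculative wrapper (an UNSPEC
accepted by targetm.sched.skip_rtx_p, or any UNSPEC when that hook is NULL)
is looked through before the codes are compared, so a speculative copy of a
pattern still compares equal to the original.  A minimal standalone model
with made-up node types instead of rtl:

#include <stddef.h>

/* Toy expression node; WRAPPER stands in for the speculative UNSPEC.  */
enum code { REG, PLUS, WRAPPER };

struct node
{
  enum code code;
  int regno;                  /* for REG */
  struct node *op0, *op1;     /* operands; WRAPPER uses only op0 */
};

/* Structural equality that looks through WRAPPER nodes, in the same
   spirit as sel_rtx_equal_p () looking through speculative UNSPECs.  */
static int
equal_p (const struct node *x, const struct node *y)
{
  if (x == y)
    return 1;
  if (x == NULL || y == NULL)
    return 0;

  /* Strip wrappers before comparing codes.  */
  if (x->code == WRAPPER)
    return equal_p (x->op0, y);
  if (y->code == WRAPPER)
    return equal_p (x, y->op0);

  if (x->code != y->code)
    return 0;

  switch (x->code)
    {
    case REG:
      return x->regno == y->regno;
    case PLUS:
      return equal_p (x->op0, y->op0) && equal_p (x->op1, y->op1);
    default:
      return 0;
    }
}

int
main (void)
{
  struct node r1 = { REG, 1, NULL, NULL };
  struct node r2 = { REG, 2, NULL, NULL };
  struct node sum = { PLUS, 0, &r1, &r2 };
  struct node spec = { WRAPPER, 0, &sum, NULL };

  /* Equal: the wrapper around SUM is ignored.  */
  return equal_p (&sum, &spec) ? 0 : 1;
}
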
 
 /* Returns LHS and RHS are ok to be scheduled separately.  */
@@ -1059,15 +1182,18 @@ sel_vinsn_cost (vinsn_t vi)
   return cost;
 }
 
+static bool insn_is_the_only_one_in_bb_p (insn_t);
+static void init_invalid_data_sets (basic_block);
+
 /* Emit new insn after AFTER based on PATTERN and initialize its data from
    EXPR and SEQNO.  */
 insn_t
-sel_gen_insn_from_rtx_after (rtx pattern, rhs_t expr, int seqno,
-			     insn_t after)
+sel_gen_insn_from_rtx_after (rtx pattern, expr_t expr, int seqno, insn_t after)
 {
   insn_t new_insn;
 
   insn_init.what = INSN_INIT_WHAT_INSN;
+
   new_insn = emit_insn_after (pattern, after);
 
   insn_init.todo = INSN_INIT_TODO_LUID | INSN_INIT_TODO_SSID;
@@ -1077,6 +1203,27 @@ sel_gen_insn_from_rtx_after (rtx pattern
   return new_insn;
 }
 
+/* Force newly generated vinsns to be unique.  */
+static bool init_insn_force_unique_p = false;
+
+/* Emit new speculation recovery insn after AFTER based on PATTERN and
+   initialize its data from EXPR and SEQNO.  */
+insn_t
+sel_gen_recovery_insn_from_rtx_after (rtx pattern, expr_t expr, int seqno,
+				      insn_t after)
+{
+  insn_t insn;
+
+  gcc_assert (!init_insn_force_unique_p);
+
+  init_insn_force_unique_p = true;
+  insn = sel_gen_insn_from_rtx_after (pattern, expr, seqno, after);
+  CANT_MOVE (insn) = 1;
+  init_insn_force_unique_p = false;
+
+  return insn;
+}
+
 /* Emit new insn after AFTER based on EXPR and SEQNO.  */
 insn_t
 sel_gen_insn_from_expr_after (rhs_t expr, int seqno, insn_t after)
@@ -1117,8 +1264,7 @@ vinsns_correlate_as_rhses_p (vinsn_t x, 
       gcc_assert (VINSN_RHS (x));
       gcc_assert (VINSN_RHS (y));
 
-      return expr_equal_p (VINSN_RHS (x), 
-			   VINSN_RHS (y));
+      return sel_rtx_equal_p (VINSN_RHS (x), VINSN_RHS (y));
     }
   else
     /* Compare whole insns. */
@@ -1183,7 +1329,7 @@ merge_expr_data (expr_t to, expr_t from)
     RHS_SCHED_TIMES (to) = RHS_SCHED_TIMES (from);
 
   EXPR_SPEC_DONE_DS (to) = ds_max_merge (EXPR_SPEC_DONE_DS (to),
-					  EXPR_SPEC_DONE_DS (from));
+					 EXPR_SPEC_DONE_DS (from));
 
   EXPR_SPEC_TO_CHECK_DS (to) |= EXPR_SPEC_TO_CHECK_DS (from);
   bitmap_ior_into (EXPR_CHANGED_ON_INSNS (to),
@@ -1200,6 +1346,12 @@ merge_expr (expr_t to, expr_t from)
   gcc_assert (to_vi == from_vi
 	      || vinsns_correlate_as_rhses_p (to_vi, from_vi));
 
+  /* Make sure that the speculative pattern is propagated into exprs that
+     have a non-speculative one.  This will provide us with consistent
+     speculative bits and speculative patterns inside the expr.  */
+  if (EXPR_SPEC_DONE_DS (to) == 0)
+    change_vinsn_in_expr (to, EXPR_VINSN (from));
+
   merge_expr_data (to, from);
 }
 
@@ -1611,15 +1763,21 @@ deps_init_id (idata_t id, insn_t insn, b
 
 
 
-static bool
-sel_cfg_note_p (insn_t insn)
-{
-  return NOTE_INSN_BASIC_BLOCK_P (insn) || LABEL_P (insn);
-}
-
 /* Implement hooks for collecting fundamental insn properties like if insn is
    an ASM or is within a SCHED_GROUP.  */
 
+static void init_invalid_av_set (basic_block);
+
+/* Initialize region-scope data structures for basic blocks.  */
+static void
+init_global_and_expr_for_bb (basic_block bb)
+{
+  if (sel_bb_empty_p (bb))
+    return;
+
+  init_invalid_av_set (bb);
+}
+
 /* Data for global dependency analysis (to initialize CANT_MOVE and
    SCHED_GROUP_P).  */
 static struct
@@ -1633,13 +1791,16 @@ static struct
 static void
 init_global_and_expr_for_insn (insn_t insn)
 {
-  if (sel_cfg_note_p (insn))
+  if (LABEL_P (insn))
     return;
 
-  gcc_assert (INSN_P (insn));
+  if (NOTE_INSN_BASIC_BLOCK_P (insn))
+    {
+      init_global_data.prev_insn = NULL_RTX;
+      return;
+    }
 
-  if (sel_bb_header_p (insn))
-    init_global_data.prev_insn = NULL_RTX;
+  gcc_assert (INSN_P (insn));
 
   if (SCHED_GROUP_P (insn))
     /* Setup a sched_group.  */
@@ -1702,7 +1863,7 @@ sel_init_global_and_expr (bb_vec_t bbs)
     const struct sched_scan_info_def ssi =
       {
 	NULL, /* extend_bb */
-	NULL, /* init_bb */
+	init_global_and_expr_for_bb, /* init_bb */
 	extend_insn, /* extend_insn */
 	init_global_and_expr_for_insn /* init_insn */
       };
@@ -1711,40 +1872,37 @@ sel_init_global_and_expr (bb_vec_t bbs)
   }
 }
 
-/* Perform stage 1 of finalization of the INSN's data.  */
+/* Finalize region-scope data structures for basic blocks.  */
 static void
-finish_global_and_expr_insn_1 (insn_t insn)
+finish_global_and_expr_for_bb (basic_block bb)
 {
-  if (sel_cfg_note_p (insn))
-    return;
-
-  gcc_assert (INSN_P (insn));
-
-  if (INSN_LUID (insn) > 0)
-    av_set_clear (&AV_SET (insn));
-
-  BITMAP_FREE (INSN_ANALYZED_DEPS (insn));
-  BITMAP_FREE (INSN_FOUND_DEPS (insn));
+  av_set_clear (&BB_AV_SET (bb));
+  BB_AV_LEVEL (bb) = 0;
 }
 
-/* Perform stage 2 of finalization of the INSN's data.  */
+/* Finalize INSN's data.  */
 static void
-finish_global_and_expr_insn_2 (insn_t insn)
+finish_global_and_expr_insn (insn_t insn)
 {
-  if (sel_cfg_note_p (insn))
+  if (LABEL_P (insn) || NOTE_INSN_BASIC_BLOCK_P (insn))
     return;
 
   gcc_assert (INSN_P (insn));
 
   if (INSN_LUID (insn) > 0)
     {
+      BITMAP_FREE (INSN_ANALYZED_DEPS (insn));
+      BITMAP_FREE (INSN_FOUND_DEPS (insn));
+
+      INSN_WS_LEVEL (insn) = 0;
+
       gcc_assert (VINSN_COUNT (INSN_VINSN (insn)) == 1);
 
       clear_expr (INSN_EXPR (insn));
     }
 }
 
-static void finish_insn (void);
+static void finish_insns (void);
 
 /* Finalize per instruction data for the whole region.  */
 void
@@ -1759,30 +1917,14 @@ sel_finish_global_and_expr (void)
     for (i = 0; i < current_nr_blocks; i++)
       VEC_quick_push (basic_block, bbs, BASIC_BLOCK (BB_TO_BLOCK (i)));
 
-    /* Before cleaning up insns exprs we first must clean all the cached
-       av_set.  */
-
-    /* Clear INSN_AVs.  */
+    /* Clear AV_SETs and INSN_EXPRs.  */
     {
       const struct sched_scan_info_def ssi =
 	{
 	  NULL, /* extend_bb */
-	  NULL, /* init_bb */
+	  finish_global_and_expr_for_bb, /* init_bb */
 	  NULL, /* extend_insn */
-	  finish_global_and_expr_insn_1 /* init_insn */
-	};
-
-      sched_scan (&ssi, bbs, NULL, NULL, NULL);
-    }
-
-    /* Clear INSN_EXPRs.  */
-    {
-      const struct sched_scan_info_def ssi =
-	{
-	  NULL, /* extend_bb */
-	  NULL, /* init_bb */
-	  NULL, /* extend_insn */
-	  finish_global_and_expr_insn_2 /* init_insn */
+	  finish_global_and_expr_insn /* init_insn */
 	};
 
       sched_scan (&ssi, bbs, NULL, NULL, NULL);
@@ -1791,7 +1933,7 @@ sel_finish_global_and_expr (void)
     VEC_free (basic_block, heap, bbs);
   }
 
-  finish_insn ();
+  finish_insns ();
 }
 
 /* In the below hooks, we merely calculate whether or not a dependence 
@@ -1931,6 +2073,20 @@ has_dependence_note_reg_use (int regno)
 
       if (reg_last->clobbers)
 	*dsp = (*dsp & ~SPECULATIVE) | DEP_ANTI;
+
+      /* Handle BE_IN_SPEC.  */
+      if (reg_last->uses)
+	{
+	  ds_t pro_spec_checked_ds;
+
+	  pro_spec_checked_ds = INSN_SPEC_CHECKED_DS (has_dependence_data.pro);
+	  pro_spec_checked_ds = ds_get_max_dep_weak (pro_spec_checked_ds);
+
+	  if (pro_spec_checked_ds != 0)
+	    /* Merge BE_IN_SPEC bits into *DSP.  */
+	    *dsp = ds_full_merge (*dsp, pro_spec_checked_ds,
+				  NULL_RTX, NULL_RTX);
+	}
     }
 }
 
@@ -2280,46 +2436,97 @@ bookkeeping_can_be_created_if_moved_thro
     return false;
 
   FOR_EACH_SUCC (succ, si, jump)
-    if (num_preds_gt_1 (succ))
+    if (sel_num_cfg_preds_gt_1 (succ))
       return true;
 
   return false;
 }
 
+/* Return 'true' if INSN is the only one in its basic block.  */
+static bool
+insn_is_the_only_one_in_bb_p (insn_t insn)
+{
+  return sel_bb_head_p (insn) && sel_bb_end_p (insn);
+}
+
+static void sel_add_or_remove_bb (basic_block, int);
+static void free_data_sets (basic_block);
+static void move_bb_info (basic_block, basic_block);
+static void remove_empty_bb (basic_block, bool);
+
 /* Rip-off INSN from the insn stream.  */
 void
-sched_sel_remove_insn (insn_t insn)
+sel_remove_insn (insn_t insn)
 {
-  gcc_assert (AV_SET (insn) == NULL && !INSN_AV_VALID_P (insn)
-	      && !LV_SET_VALID_P (insn));
+  basic_block bb = BLOCK_FOR_INSN (insn);
 
-  if (INSN_IN_STREAM_P (insn))
-    remove_insn (insn);
+  gcc_assert (INSN_IN_STREAM_P (insn));
+  remove_insn (insn);
 
   /* It is necessary to null this fields before calling add_insn ().  */
   PREV_INSN (insn) = NULL_RTX;
   NEXT_INSN (insn) = NULL_RTX;
 
   clear_expr (INSN_EXPR (insn));
-}
 
-/* Transfer av and lv sets from FROM to TO.  */
-void
-transfer_data_sets (insn_t to, insn_t from)
-{
-  /* We used to assert !INSN_AV_VALID_P here, but this is wrong when 
-     during previous compute_av_set the window size was reached 
-     exactly at TO.  In this case, AV_SET (to) would be NULL.  */
-  gcc_assert (AV_SET (to) == NULL && !LV_SET_VALID_P (to));
+  if (sel_bb_empty_p (bb))
+    /* Get rid of empty BB.  */
+    {
+      free_data_sets (bb);
+
+      if (single_succ_p (bb))
+	{
+	  basic_block succ_bb;
+	  bool rescan_p;
+	  basic_block pred_bb;
+
+	  succ_bb = single_succ (bb);
+	  rescan_p = true;
+	  pred_bb = NULL;
 
-  AV_SET (to) = AV_SET (from);
-  AV_SET (from) = NULL;
+	  /* Redirect all non-fallthru edges to the next bb.  */
+	  while (rescan_p)
+	    {
+	      edge e;
+	      edge_iterator ei;
+
+	      rescan_p = false;
+
+	      FOR_EACH_EDGE (e, ei, bb->preds)
+		{
+		  pred_bb = e->src;
+
+		  if (!(e->flags & EDGE_FALLTHRU))
+		    {
+		      sel_redirect_edge_and_branch (e, succ_bb);
+		      rescan_p = true;
+		      break;
+		    }
+		}
+	    }
+
+	  /* If it is possible, merge BB with its predecessor.  */
+	  if (can_merge_blocks_p (bb->prev_bb, bb))
+	    sel_merge_blocks (bb->prev_bb, bb);
+	  else
+	    /* Otherwise this is a block without a fallthru predecessor.
+	       Just delete it.  */
+	    {
+	      gcc_assert (pred_bb != NULL);
 
-  AV_LEVEL (to) = AV_LEVEL (from);
-  AV_LEVEL (from) = 0;
+	      move_bb_info (pred_bb, bb);
+	      remove_empty_bb (bb, true);
+	    }
+	}
+      else
+	/* Do not delete BB if it has more than one successor.
+	   That can occur when we are moving a jump.  */
+	{
+	  gcc_assert (can_merge_blocks_p (bb->prev_bb, bb));
 
-  LV_SET (to) = LV_SET (from);
-  LV_SET (from) =  NULL;
+	  sel_merge_blocks (bb->prev_bb, bb);
+	}
+    }
 }
 
 /* Estimate number of the insns in BB.  */
@@ -2378,7 +2585,7 @@ get_seqno_of_a_pred (insn_t insn)
 
   gcc_assert (INSN_SIMPLEJUMP_P (insn));
 
-  if (!sel_bb_header_p (insn))
+  if (!sel_bb_head_p (insn))
     seqno = INSN_SEQNO (PREV_INSN (insn));
   else
     {
@@ -2417,8 +2624,8 @@ get_seqno_of_a_pred (insn_t insn)
   {
     insn_t succ = cfg_succ (insn);
 
-    gcc_assert ((succ != NULL && seqno <= INSN_SEQNO (succ))
-		|| (succ == NULL && flag_sel_sched_pipelining_outer_loops));
+    gcc_assert (succ != NULL
+		|| flag_sel_sched_pipelining_outer_loops);
   }
 #endif
 
@@ -2442,7 +2649,7 @@ extend_insn (void)
 
 /* Finalize data structures for insns from current region.  */
 static void
-finish_insn (void)
+finish_insns (void)
 {
   VEC_free (sel_insn_data_def, heap, s_i_d);
   deps_finish_d_i_d ();
@@ -2450,6 +2657,15 @@ finish_insn (void)
 
 static insn_vec_t new_insns = NULL;
 
+/* This variable is used to ensure that no insns will be emitted by
+   outer-world functions like redirect_edge_and_branch ().  */
+static bool can_add_insns_p = true;
+
+/* The same as the previous flag except that notes are allowed 
+   to be emitted.  
+   FIXME: avoid this dependency between files.  */
+bool can_add_real_insns_p = true;
+
 /* An implementation of RTL_HOOKS_INSN_ADDED hook.  The hook is used for 
    initializing data structures when new insn is emitted.
    This hook remembers all relevant instuctions which can be initialized later
@@ -2460,27 +2676,50 @@ sel_rtl_insn_added (insn_t insn)
   gcc_assert (can_add_insns_p
 	      && (!INSN_P (insn) || can_add_real_insns_p));
 
+  if (INSN_P (insn)
+      && INSN_IN_STREAM_P (insn)
+      && insn_is_the_only_one_in_bb_p (insn))
+    {
+      extend_bb_info ();
+      init_invalid_data_sets (BLOCK_FOR_INSN (insn));
+    }
+
   if (!INSN_P (insn) || insn_init.what == INSN_INIT_WHAT_INSN_RTX)
     return;
 
-  gcc_assert (BLOCK_FOR_INSN (insn) == NULL
-	      || (VEC_length (sel_bb_info_def, sel_bb_info)
-		  <= (unsigned) BLOCK_NUM (insn))
-	      || (CONTAINING_RGN (BB_TO_BLOCK (0)) 
-		  == CONTAINING_RGN (BLOCK_NUM (insn))));
-
   /* Initialize a bit later because something (e.g. CFG) is not
      consistent yet.  These insns will be initialized when
      sel_init_new_insns () is called.  */
   VEC_safe_push (rtx, heap, new_insns, insn);
 }
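
The reason sel_rtl_insn_added () only records the new insn is that the CFG
may still be inconsistent at the moment of emission; the queued insns are
initialized later by sel_init_new_insns () according to insn_init.todo.  A
standalone sketch of that deferral (illustrative names in place of the GCC
hook types):

#include <stdio.h>
#include <stdlib.h>

struct insn { int uid; };

/* Queue of insns whose initialization is postponed.  */
static struct insn **new_insns;
static int n_new_insns, new_insns_alloc;

/* Analogue of the RTL_HOOKS_INSN_ADDED hook: just remember INSN.  */
static void
record_new_insn (struct insn *insn)
{
  if (n_new_insns == new_insns_alloc)
    {
      new_insns_alloc = 2 * new_insns_alloc + 1;
      new_insns = realloc (new_insns,
                           new_insns_alloc * sizeof (*new_insns));
    }
  new_insns[n_new_insns++] = insn;
}

/* Analogue of sel_init_new_insns (): initialize everything queued so
   far, then truncate the queue.  */
static void
flush_new_insns (void)
{
  int i;

  for (i = 0; i < n_new_insns; i++)
    printf ("initializing insn %d\n", new_insns[i]->uid);

  n_new_insns = 0;
}

int
main (void)
{
  struct insn a = { 10 }, b = { 11 };

  record_new_insn (&a);
  record_new_insn (&b);
  flush_new_insns ();           /* the "CFG" is consistent by now */
  free (new_insns);
  return 0;
}
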
 
+/* Save original RTL hooks here.  */
+static struct rtl_hooks orig_rtl_hooks;
+
+/* Redefine RTL hooks so we can catch the moment of creating an insn.  */
+#undef RTL_HOOKS_INSN_ADDED
+#define RTL_HOOKS_INSN_ADDED sel_rtl_insn_added
+static const struct rtl_hooks sel_rtl_hooks = RTL_HOOKS_INITIALIZER;
+
+void
+sel_register_rtl_hooks (void)
+{
+  orig_rtl_hooks = rtl_hooks;
+  rtl_hooks = sel_rtl_hooks;
+}
+
+void
+sel_unregister_rtl_hooks (void)
+{
+  rtl_hooks = orig_rtl_hooks;
+}
+
 /* A proxy to pass initialization data to init_insn ().  */
 static sel_insn_data_def _insn_init_ssid;
 static sel_insn_data_t insn_init_ssid = &_insn_init_ssid;
 
-/* A dummy variable used in set_insn_init () and init_insn ().  */
-static vinsn_t empty_vinsn = NULL;
+/* If true, create a new vinsn.  Otherwise use the one from EXPR.  */
+static bool insn_init_create_new_vinsn_p;
 
 /* Set all necessary data for initialization of the new insn[s].  */
 static void
@@ -2491,9 +2730,12 @@ set_insn_init (expr_t expr, vinsn_t vi, 
   copy_expr (x, expr);
 
   if (vi != NULL)
-    change_vinsn_in_expr (x, vi);
+    {
+      insn_init_create_new_vinsn_p = false;
+      change_vinsn_in_expr (x, vi);
+    }
   else
-    change_vinsn_in_expr (x, empty_vinsn);
+    insn_init_create_new_vinsn_p = true;
 
   insn_init_ssid->seqno = seqno;
 }
@@ -2509,7 +2751,6 @@ init_insn (insn_t insn)
   /* The fields mentioned below are special and hence are not being
      propagated to the new insns.  */
   gcc_assert (!ssid->asm_p && ssid->sched_next == NULL
-	      && ssid->av_level == 0 && ssid->av == NULL
 	      && !ssid->after_stall_p && ssid->sched_cycle == 0);
 
   gcc_assert (INSN_P (insn) && INSN_LUID (insn) > 0);
@@ -2519,8 +2760,8 @@ init_insn (insn_t insn)
 
   copy_expr (expr, x);
 
-  if (EXPR_VINSN (x) == empty_vinsn)
-    change_vinsn_in_expr (expr, vinsn_create (insn, false));
+  if (insn_init_create_new_vinsn_p)
+    change_vinsn_in_expr (expr, vinsn_create (insn, init_insn_force_unique_p));
 
   INSN_SEQNO (insn) = ssid->seqno;
 
@@ -2529,20 +2770,10 @@ init_insn (insn_t insn)
 }
 
 /* This is used to initialize spurious jumps generated by
-   sel_split_block () / sel_redirect_edge ().  */
+   sel_redirect_edge ().  */
 static void
 init_simplejump (insn_t insn)
 {
-  rtx succ = cfg_succ_1 (insn, SUCCS_ALL);
-
-  gcc_assert (LV_SET (insn) == NULL);
-
-  if (sel_bb_header_p (insn))
-    {
-      LV_SET (insn) = get_regset_from_pool ();
-      COPY_REG_SET (LV_SET (insn), LV_SET (succ));
-    }
-
   init_expr (INSN_EXPR (insn), vinsn_create (insn, false), 0, 0, 0, 
              0, 0, NULL);
 
@@ -2552,31 +2783,6 @@ init_simplejump (insn_t insn)
   INSN_FOUND_DEPS (insn) = BITMAP_ALLOC (NULL);
 }
 
-/* This is used to move lv_sets to the first insn of basic block if that
-   insn was emitted by the target.  */
-static void
-insn_init_move_lv_set_if_bb_header (insn_t insn)
-{
-  if (sel_bb_header_p (insn))
-    {
-      insn_t next = NEXT_INSN (insn);
-
-      gcc_assert (INSN_LUID (insn) == 0);
-
-      /* Find the insn that used to be a bb_header.  */
-      while (INSN_LUID (next) == 0)
-	{
-	  gcc_assert (!sel_bb_end_p (next));
-	  next = NEXT_INSN (next);
-	}
-
-      gcc_assert (LV_SET_VALID_P (next));
-
-      LV_SET (insn) = LV_SET (next);
-      LV_SET (next) = NULL;
-    }
-}
-
 /* Perform deferred initialization of insns.  This is used to process 
    a new jump that may be created by redirect_edge.  */
 void
@@ -2614,19 +2820,19 @@ sel_init_new_insns (void)
 
       sched_scan (&ssi, NULL, NULL, new_insns, NULL);
     }
-  
-  if (todo & INSN_INIT_TODO_MOVE_LV_SET_IF_BB_HEADER)
-    {
-      const struct sched_scan_info_def ssi =
-	{
-	  NULL, /* extend_bb */
-	  NULL, /* init_bb */
-	  sel_extend_insn_rtx_data, /* extend_insn */
-	  insn_init_move_lv_set_if_bb_header /* init_insn */
-	};
 
-      sched_scan (&ssi, NULL, NULL, new_insns, NULL);
-    }
+#ifdef ENABLE_CHECKING
+  /* Check that all insns were emitted to the current_region.  */
+  {
+    unsigned i;
+    insn_t insn;
+    int current_region = CONTAINING_RGN (BB_TO_BLOCK (0));
+
+    for (i = 0; VEC_iterate (rtx, new_insns, i, insn); i++)
+      gcc_assert (CONTAINING_RGN (BLOCK_NUM (insn))
+		  == current_region);
+  }
+#endif
 
   VEC_truncate (rtx, new_insns, 0);
 }
@@ -2682,14 +2888,20 @@ vinsn_dfa_cost (vinsn_t vinsn, fence_t f
 
 /* Functions to init/finish work with lv sets.  */
 
-/* Init LV_SET of INSN from a global_live_at_start set of BB.
+/* Init BB_LV_SET of BB from a global_live_at_start set of BB.
    NOTE: We do need to detach register live info from bb because we
-   use those regsets as LV_SETs.  */
+   use those regsets as BB_LV_SETs.  */
 static void
-init_lv_set_for_insn (insn_t insn, basic_block bb)
+init_lv_set (basic_block bb)
 {
-  LV_SET (insn) = get_regset_from_pool ();
-  COPY_REG_SET (LV_SET (insn), glat_start[bb->index]);
+  gcc_assert (!BB_LV_SET_VALID_P (bb));
+
+  if (sel_bb_empty_p (bb))
+    return;
+
+  BB_LV_SET (bb) = get_regset_from_pool ();
+  COPY_REG_SET (BB_LV_SET (bb), glat_start[bb->index]);
+  BB_LV_SET_VALID_P (bb) = true;
 }
 
 /* Initialize lv set of all bb headers.  */
@@ -2698,39 +2910,23 @@ init_lv_sets (void)
 {
   basic_block bb;
 
-  /* Initialization of the LV sets.  */
+  /* Initialize LV sets.  */
   FOR_EACH_BB (bb)
-    {
-      insn_t head;
-      insn_t tail;
-
-      get_ebb_head_tail (bb, bb, &head, &tail);
-
-      if (/* BB has at least one insn.  */
-	  INSN_P (head))
-	init_lv_set_for_insn (head, bb);
-    }
+    init_lv_set (bb);
 
-  /* Don't forget EXIT_INSN.  */
-  init_lv_set_for_insn (exit_insn, EXIT_BLOCK_PTR);
+  /* Don't forget EXIT_BLOCK.  */
+  init_lv_set (EXIT_BLOCK_PTR);
 }
 
 /* Release lv set of HEAD.  */
 static void
-release_lv_set_for_insn (rtx head)
+free_lv_set (basic_block bb)
 {
-  int uid = INSN_UID (head);
-  
-  if (((unsigned) uid) < VEC_length (sel_insn_rtx_data_def, s_i_r_d))
-    {
-      regset lv = LV_SET (head);
+  gcc_assert (BB_LV_SET (bb) != NULL);
 
-      if (lv != NULL)
-	{
-	  return_regset_to_pool (lv);
-	  LV_SET (head) = NULL;
-	}
-    }
+  return_regset_to_pool (BB_LV_SET (bb));
+  BB_LV_SET (bb) = NULL;
+  BB_LV_SET_VALID_P (bb) = false;
 }
 
 /* Finalize lv sets of all bb headers.  */
@@ -2739,27 +2935,140 @@ free_lv_sets (void)
 {
   basic_block bb;
 
-  gcc_assert (LV_SET_VALID_P (exit_insn));
-  release_lv_set_for_insn (exit_insn);
+  /* Don't forget EXIT_BLOCK.  */
+  free_lv_set (EXIT_BLOCK_PTR);
 
-  /* !!! FIXME: Walk through bb_headers only.  */
+  /* Free LV sets.  */
   FOR_EACH_BB (bb)
-    {
-      insn_t head;
-      insn_t next_tail;
+    if (!sel_bb_empty_p (bb))
+      free_lv_set (bb);
+}
 
-      get_ebb_head_tail (bb, bb, &head, &next_tail);
-      next_tail = NEXT_INSN (next_tail);
+/* Initialize an invalid LV_SET for BB.
+   This set will be updated the next time compute_live () processes BB.  */
+static void
+init_invalid_lv_set (basic_block bb)
+{
+  gcc_assert (BB_LV_SET (bb) == NULL
+	      && BB_LV_SET_VALID_P (bb) == false);
 
-      /* We should scan through all the insns because bundling could
-	 have emitted new insns at the bb headers.  */
-      while (head != next_tail)
-	{
-          release_lv_set_for_insn (head);
-	  head = NEXT_INSN (head);
-	}
-    }
+  BB_LV_SET (bb) = get_regset_from_pool ();
 }
+
+/* Initialize an invalid AV_SET for BB.
+   This set will be updated the next time compute_av () processes BB.  */
+static void
+init_invalid_av_set (basic_block bb)
+{
+  gcc_assert (BB_AV_LEVEL (bb) == 0
+	      && BB_AV_SET (bb) == NULL);
+
+  BB_AV_LEVEL (bb) = -1;
+}
+
+/* Initialize invalid data sets for BB.
+   These sets will be updated the next time update_data_sets () processes BB.  */
+static void
+init_invalid_data_sets (basic_block bb)
+{
+  init_invalid_lv_set (bb);
+  init_invalid_av_set (bb);
+}
+
+/* Free av set of BB.  */
+static void
+free_av_set (basic_block bb)
+{
+  av_set_clear (&BB_AV_SET (bb));
+  BB_AV_LEVEL (bb) = 0;
+}
+
+/* Free data sets of BB.  */
+static void
+free_data_sets (basic_block bb)
+{
+  free_lv_set (bb);
+  free_av_set (bb);
+}
+
+/* Exchange lv sets of TO and FROM.  */
+static void
+exchange_lv_sets (basic_block to, basic_block from)
+{
+  {
+    regset to_lv_set = BB_LV_SET (to);
+
+    BB_LV_SET (to) = BB_LV_SET (from);
+    BB_LV_SET (from) = to_lv_set;
+  }
+
+  {
+    bool to_lv_set_valid_p = BB_LV_SET_VALID_P (to);
+
+    BB_LV_SET_VALID_P (to) = BB_LV_SET_VALID_P (from);
+    BB_LV_SET_VALID_P (from) = to_lv_set_valid_p;
+  }
+}
+
+
+/* Exchange av sets of TO and FROM.  */
+static void
+exchange_av_sets (basic_block to, basic_block from)
+{
+  {
+    av_set_t to_av_set = BB_AV_SET (to);
+
+    BB_AV_SET (to) = BB_AV_SET (from);
+    BB_AV_SET (from) = to_av_set;
+  }
+
+  {
+    int to_av_level = BB_AV_LEVEL (to);
+
+    BB_AV_LEVEL (to) = BB_AV_LEVEL (from);
+    BB_AV_LEVEL (from) = to_av_level;
+  }
+}
+
+/* Exchange data sets of TO and FROM.  */
+static void
+exchange_data_sets (basic_block to, basic_block from)
+{
+  exchange_lv_sets (to, from);
+  exchange_av_sets (to, from);
+}
+
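+/* Implementation of AV_SET () macro.  Return the av set of INSN.  */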
+av_set_t
+get_av_set (insn_t insn)
+{
+  av_set_t av_set;
+
+  gcc_assert (AV_SET_VALID_P (insn));
+
+  if (sel_bb_head_p (insn))
+    av_set = BB_AV_SET (BLOCK_FOR_INSN (insn));
+  else
+    av_set = NULL;
+
+  return av_set;
+}
+
+/* Implementation of AV_LEVEL () macro.  Return AV_LEVEL () of INSN.  */
+int
+get_av_level (insn_t insn)
+{
+  int av_level;
+
+  gcc_assert (INSN_P (insn));
+
+  if (sel_bb_head_p (insn))
+    av_level = BB_AV_LEVEL (BLOCK_FOR_INSN (insn));
+  else
+    av_level = INSN_WS_LEVEL (insn);
+
+  return av_level;
+}
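
Both accessors rely on the generation-counter trick behind AV_SET_VALID_P ():
a cached set is valid only while its recorded level equals global_level, so
bumping global_level invalidates every cached set at once without walking
them.  A tiny standalone sketch of the idea (plain C, not the scheduler
types):

#include <stdio.h>

static int global_level = 1;

struct block
{
  int av_level;     /* generation at which AV_SET was computed; -1: never */
  int av_set;       /* stands in for the real av_set_t */
};

/* Recompute the cached set of BB unless it is still valid.  */
static int
get_cached_set (struct block *bb)
{
  if (bb->av_level == global_level)
    return bb->av_set;          /* still valid, reuse */

  bb->av_set = 42;              /* pretend we recomputed it */
  bb->av_level = global_level;
  return bb->av_set;
}

int
main (void)
{
  struct block bb = { -1, 0 };

  get_cached_set (&bb);         /* computed */
  get_cached_set (&bb);         /* reused */

  global_level++;               /* everything becomes stale at once */
  get_cached_set (&bb);         /* recomputed */

  printf ("done\n");
  return 0;
}
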
+
 
 
 /* Variables to work with control-flow graph.  */
@@ -2770,86 +3079,46 @@ static VEC (basic_block, heap) *last_add
 
 /* Functions to work with control-flow graph.  */
 
-/* Return the first real insn of BB.  If STRICT_P is true, then assume
-   that BB is current region and hence has no unrelevant notes in it.  */
-static insn_t
-sel_bb_header_1 (basic_block bb, bool strict_p)
+/* Return basic block note of BB.  */
+insn_t
+sel_bb_head (basic_block bb)
 {
-  insn_t header;
+  insn_t head;
 
   if (bb == EXIT_BLOCK_PTR)
     {
       gcc_assert (exit_insn != NULL_RTX);
-      header = exit_insn;
+      head = exit_insn;
     }
   else
     {
-      if (strict_p)
-	{
-	  rtx note = bb_note (bb);
-
-	  if (note != BB_END (bb))
-	    header = NEXT_INSN (note);
-	  else
-	    header = NULL_RTX;
-	}
-      else
-	{
-	  rtx head, tail;
+      insn_t note;
 
-	  get_ebb_head_tail (bb, bb, &head, &tail);
+      note = bb_note (bb);
+      head = next_nonnote_insn (note);
 
-	  if (INSN_P (head))
-	    header = head;
-	  else
-	    header = NULL_RTX;
-	}
+      if (BLOCK_FOR_INSN (head) != bb)
+	head = NULL_RTX;
     }
 
-  return header;
-}
-
-/* Return the first real insn of BB.  */
-insn_t
-sel_bb_header (basic_block bb)
-{
-  insn_t header = sel_bb_header_1 (bb, true);
-
-  gcc_assert (header == NULL_RTX || INSN_P (header));
-
-  return header;
+  return head;
 }
 
 /* Return true if INSN is a basic block header.  */
 bool
-sel_bb_header_p (insn_t insn)
+sel_bb_head_p (insn_t insn)
 {
-  gcc_assert (insn != NULL_RTX && INSN_P (insn));
-
-  return insn == sel_bb_header (BLOCK_FOR_INSN (insn));
-}
-
-/* Return true if BB has no real insns.  If STRICT_P is true, then assume
-   that BB is current region and hence has no unrelevant notes in it.  */
-bool
-sel_bb_empty_p_1 (basic_block bb, bool strict_p)
-{
-  return sel_bb_header_1 (bb, strict_p) == NULL_RTX;
-}
-
-/* Return true if BB has no real insns.  If STRICT_P is true, then assume
-   that BB is current region and hence has no unrelevant notes in it.  */
-bool
-sel_bb_empty_p (basic_block bb)
-{
-  return sel_bb_empty_p_1 (bb, true);
+  return sel_bb_head (BLOCK_FOR_INSN (insn)) == insn;
 }
 
 /* Return last insn of BB.  */
 insn_t
 sel_bb_end (basic_block bb)
 {
-  gcc_assert (!sel_bb_empty_p (bb));
+  if (sel_bb_empty_p (bb))
+    return NULL_RTX;
+
+  gcc_assert (bb != EXIT_BLOCK_PTR);
 
   return BB_END (bb);
 }
@@ -2861,6 +3130,13 @@ sel_bb_end_p (insn_t insn)
   return insn == sel_bb_end (BLOCK_FOR_INSN (insn));
 }
 
+/* Return true if BB consists of a single NOTE_INSN_BASIC_BLOCK.  */
+bool
+sel_bb_empty_p (basic_block bb)
+{
+  return sel_bb_head (bb) == NULL;
+}
+
 /* True when BB belongs to the current scheduling region.  */
 bool
 in_current_region_p (basic_block bb)
@@ -2871,13 +3147,6 @@ in_current_region_p (basic_block bb)
   return CONTAINING_RGN (bb->index) == CONTAINING_RGN (BB_TO_BLOCK (0));
 }
 
-/* Extend per bb data structures.  */
-static void
-extend_bb (void)
-{
-  VEC_safe_grow_cleared (sel_bb_info_def, heap, sel_bb_info, last_basic_block);
-}
-
 /* Remove all notes from BB.  */
 static void
 init_bb (basic_block bb)
@@ -2891,7 +3160,7 @@ sel_init_bbs (bb_vec_t bbs, basic_block 
 {
   const struct sched_scan_info_def ssi =
     {
-      extend_bb, /* extend_bb */
+      extend_bb_info, /* extend_bb */
       init_bb, /* init_bb */
       NULL, /* extend_insn */
       NULL /* init_insn */
@@ -2937,7 +3206,7 @@ sel_finish_bbs (void)
   if (flag_sel_sched_pipelining_outer_loops && current_loop_nest)
     sel_remove_loop_preheader ();
 
-  VEC_free (sel_bb_info_def, heap, sel_bb_info);
+  finish_region_bb_info ();
 }
 
 /* Return a number of INSN's successors honoring FLAGS.  */
@@ -3072,11 +3341,11 @@ cfg_preds (basic_block bb, insn_t **pred
 /* Returns true if we are moving INSN through join point.
    !!! Rewrite me: this should use cfg_preds ().  */
 bool
-num_preds_gt_1 (insn_t insn)
+sel_num_cfg_preds_gt_1 (insn_t insn)
 {
   basic_block bb;
 
-  if (!sel_bb_header_p (insn) || INSN_BB (insn) == 0)
+  if (!sel_bb_head_p (insn) || INSN_BB (insn) == 0)
     return false;
 
   bb = BLOCK_FOR_INSN (insn);
@@ -3206,22 +3475,6 @@ in_same_ebb_p (insn_t insn, insn_t succ)
   return false;
 }
 
-/* An implementation of create_basic_block hook, which additionally updates 
-   per-bb data structures.  */
-basic_block
-sel_create_basic_block (void *headp, void *endp, basic_block after)
-{
-  basic_block new_bb;
-  
-  gcc_assert (flag_sel_sched_pipelining_outer_loops 
-              || last_added_blocks == NULL);
-
-  new_bb = old_create_basic_block (headp, endp, after);
-  VEC_safe_push (basic_block, heap, last_added_blocks, new_bb);
-
-  return new_bb;
-}
-
 /* Recomputes the reverse topological order for the function and
    saves it in REV_TOP_ORDER_INDEX.  REV_TOP_ORDER_INDEX_LEN is also
    modified appropriately.  */
@@ -3265,6 +3518,48 @@ clear_outdated_rtx_info (basic_block bb)
       SCHED_GROUP_P (insn) = 0;
 }
 
+typedef VEC(rtx, heap) *rtx_vec_t;
+
+static rtx_vec_t bb_note_pool;
+
+/* Add BB_NOTE to the pool of available basic block notes.  */
+static void
+return_bb_to_pool (basic_block bb)
+{
+  rtx note = bb_note (bb);
+
+  gcc_assert (NOTE_BASIC_BLOCK (note) == bb
+	      && bb->aux == NULL);
+
+  /* It turns out that the current cfg infrastructure does not support
+     reuse of basic blocks.  Don't bother for now.  */
+  /*VEC_safe_push (rtx, heap, bb_note_pool, note);*/
+}
+
+/* Get a bb_note from pool or return NULL_RTX if pool is empty.  */
+static rtx
+get_bb_note_from_pool (void)
+{
+  if (VEC_empty (rtx, bb_note_pool))
+    return NULL_RTX;
+  else
+    {
+      rtx note = VEC_pop (rtx, bb_note_pool);
+
+      PREV_INSN (note) = NULL_RTX;
+      NEXT_INSN (note) = NULL_RTX;
+
+      return note;
+    }
+}
+
+/* Free bb_note_pool.  */
+void
+free_bb_note_pool (void)
+{
+  VEC_free (rtx, heap, bb_note_pool);
+}
+
 /* Returns a position in RGN where BB can be inserted retaining 
    topological order.  */
 static int
@@ -3405,7 +3700,7 @@ sel_add_or_remove_bb_1 (basic_block bb, 
 /* Add (remove depending on ADD) BB to (from) the current region 
    and update all data.  If BB is NULL, add all blocks from 
    last_added_blocks vector.  */
-void
+static void
 sel_add_or_remove_bb (basic_block bb, int add)
 {
   if (add > 0)
@@ -3444,8 +3739,39 @@ sel_add_or_remove_bb (basic_block bb, in
     {
       sel_add_or_remove_bb_1 (bb, add);
 
-      if (add < 0)
-	delete_basic_block (bb);
+      if (add > 0 && !sel_bb_empty_p (bb)
+	  && BB_LV_SET (bb) == NULL)
+	/* ??? We associate creating/deleting data sets with the first insn
+	   appearing / disappearing in the bb.  This is not a clean way to
+	   implement infrastructure for handling data sets because we often
+	   create new basic blocks with instructions already inside them.
+	   That could be made cleaner in two ways:
+	   1. Have a single primitive for basic block creation:
+	   sel_create_basic_block (), and then fill the new basic block with
+	   move_insns_to_bb ().
+	   2. Or associate data sets with bb notes.  */
+	init_invalid_data_sets (bb);
+
+      if (add <= 0)
+	{
+	  return_bb_to_pool (bb);
+
+	  if (add < 0)
+	    {
+	      gcc_assert (sel_bb_empty_p (bb));
+
+	      /* Can't assert av_set properties when (add == 0) because
+		 we use sel_add_or_remove_bb (bb, 0) when removing the loop
+		 preheader from the region.  At the point of removing the
+		 preheader we have already deallocated sel_region_bb_info.  */
+	      gcc_assert (BB_LV_SET (bb) == NULL
+			  && !BB_LV_SET_VALID_P (bb)
+			  && BB_AV_LEVEL (bb) == 0
+			  && BB_AV_SET (bb) == NULL);
+
+	      delete_basic_block (bb);
+	    }
+	}
     }
   else
     /* BB is NULL - process LAST_ADDED_BLOCKS instead.  */
@@ -3471,6 +3797,28 @@ sel_add_or_remove_bb (basic_block bb, in
     }
 
   rgn_setup_region (CONTAINING_RGN (bb->index));
+
+#ifdef ENABLE_CHECKING
+  /* This check verifies that all jumps jump where they should.
+     This code is adapted from flow.c: init_propagate_block_info ().  */
+  {
+    basic_block bb;
+
+    FOR_EACH_BB (bb)
+      {
+	if (JUMP_P (BB_END (bb))
+	    && any_condjump_p (BB_END (bb)))
+	  {
+	    if (!single_succ_p (bb))
+	      gcc_assert (EDGE_SUCC (bb, 0)->flags & EDGE_FALLTHRU
+			  || EDGE_SUCC (bb, 1)->flags & EDGE_FALLTHRU);
+	    else
+	      gcc_assert (JUMP_LABEL (BB_END (bb))
+			  == BB_HEAD (EDGE_SUCC (bb, 0)->dest));
+	  }
+      }
+  }
+#endif
 }
 
 /* A wrapper for create_basic_block_before, which also extends per-bb 
@@ -3503,6 +3851,18 @@ sel_create_basic_block_before (basic_blo
   return bb;
 }
 
+/* Concatenate info of EMPTY_BB to info of MERGE_BB.  */
+static void
+move_bb_info (basic_block merge_bb, basic_block empty_bb)
+{
+  gcc_assert (in_current_region_p (merge_bb));
+
+  concat_note_lists (BB_NOTE_LIST (empty_bb), 
+		     &BB_NOTE_LIST (merge_bb));
+  BB_NOTE_LIST (empty_bb) = NULL_RTX;
+
+}
+
 /* Remove an empty basic block EMPTY_BB.  When MERGE_UP_P is true, we put 
    EMPTY_BB's note lists into its predecessor instead of putting them 
    into the successor.  */
@@ -3527,12 +3887,16 @@ sel_remove_empty_bb (basic_block empty_b
 		  && EDGE_SUCC (empty_bb, 0)->dest == merge_bb);
     }
 
-  gcc_assert (in_current_region_p (merge_bb));
+  move_bb_info (merge_bb, empty_bb);
 
-  concat_note_lists (BB_NOTE_LIST (empty_bb), 
-		     &BB_NOTE_LIST (merge_bb));
-  BB_NOTE_LIST (empty_bb) = NULL_RTX;
+  remove_empty_bb (empty_bb, remove_from_cfg_p);
+}
 
+/* Remove EMPTY_BB.  If REMOVE_FROM_CFG_P is false, remove EMPTY_BB from
+   the region, but keep it in the CFG.  */
+static void
+remove_empty_bb (basic_block empty_bb, bool remove_from_cfg_p)
+{
   /* Fixup CFG.  */
 
   gcc_assert (/* The BB contains just a bb note ...  */
@@ -3603,6 +3967,44 @@ sel_remove_empty_bb (basic_block empty_b
   sel_add_or_remove_bb (empty_bb, remove_from_cfg_p ? -1 : 0);
 }
 
+static struct cfg_hooks orig_cfg_hooks;
+
+/* An implementation of create_basic_block hook, which additionally updates 
+   per-bb data structures.  */
+static basic_block
+sel_create_basic_block (void *headp, void *endp, basic_block after)
+{
+  basic_block new_bb;
+  insn_t new_bb_note;
+  
+  gcc_assert (flag_sel_sched_pipelining_outer_loops 
+              || last_added_blocks == NULL);
+
+  new_bb_note = get_bb_note_from_pool ();
+
+  if (new_bb_note == NULL_RTX)
+    new_bb = orig_cfg_hooks.create_basic_block (headp, endp, after);
+  else
+    {
+      new_bb = create_basic_block_structure (headp, endp,
+					     new_bb_note, after);
+      new_bb->aux = NULL;
+    }
+
+  VEC_safe_push (basic_block, heap, last_added_blocks, new_bb);
+
+  return new_bb;
+}
+
+/* Implement sched_init_only_bb ().  */
+static void
+sel_init_only_bb (basic_block bb, basic_block after)
+{
+  gcc_assert (after == NULL);
+
+  rgn_make_new_region_out_of_new_block (bb);
+}
+
 /* Update the latch when we've splitted or merged it.
    This should be checked for all outer loops, too.  */
 static void
@@ -3627,20 +4029,30 @@ change_loops_latches (basic_block from, 
 
 /* Splits BB on two basic blocks, adding it to the region and extending 
    per-bb data structures.  Returns the newly created bb.  */
-basic_block
-sel_split_block (basic_block bb, insn_t after)
+static basic_block
+sel_split_block (basic_block bb, rtx after)
 {
   basic_block new_bb;
 
   can_add_real_insns_p = false;
-  new_bb = split_block (bb, after)->dest;
+  new_bb = sched_split_block_1 (bb, after);
   can_add_real_insns_p = true;
 
   change_loops_latches (bb, new_bb);
 
   sel_add_or_remove_bb (new_bb, 1);
 
-  gcc_assert (after != NULL || sel_bb_empty_p (bb));
+  if (sel_bb_empty_p (bb))
+    {
+      gcc_assert (!sel_bb_empty_p (new_bb));
+
+      /* NEW_BB has data sets that need to be updated and BB holds
+	 data sets that should be removed.  Exchange these data sets
+	 so that we won't lose BB's valid data sets.  */
+      exchange_data_sets (new_bb, bb);
+
+      free_data_sets (bb);
+    }
 
   return new_bb;
 }
@@ -3675,7 +4087,7 @@ sel_split_edge (edge e)
     }
 
   /* Add all last_added_blocks to the region.  */
-  sel_add_or_remove_bb (NULL, true);
+  sel_add_or_remove_bb (NULL, 1);
 
   /* Now the CFG has been updated, and we can init data for the newly 
      created insns.  */
@@ -3685,6 +4097,59 @@ sel_split_edge (edge e)
   return new_bb;
 }
 
+/* Implement sched_create_empty_bb ().  */
+static basic_block
+sel_create_empty_bb (basic_block after)
+{
+  basic_block new_bb;
+
+  can_add_real_insns_p = false;
+  new_bb = sched_create_empty_bb_1 (after);
+  can_add_real_insns_p = true;
+
+  /* We'll explicitly initialize NEW_BB via sel_init_only_bb () a bit
+     later.  */
+  gcc_assert (VEC_length (basic_block, last_added_blocks) == 1
+	      && VEC_index (basic_block, last_added_blocks, 0) == new_bb);
+
+  VEC_free (basic_block, heap, last_added_blocks);
+
+  return new_bb;
+}
+
+/* Implement sched_create_recovery_block ().  */
+basic_block
+sel_create_recovery_block (insn_t orig_insn)
+{
+  basic_block first_bb;
+  basic_block second_bb;
+  basic_block recovery_block;
+
+  first_bb = BLOCK_FOR_INSN (orig_insn);
+  second_bb = sched_split_block (first_bb, orig_insn);
+
+  can_add_real_insns_p = false;
+  recovery_block = sched_create_recovery_block ();
+  can_add_real_insns_p = true;
+  gcc_assert (sel_bb_empty_p (recovery_block));
+
+  insn_init.what = INSN_INIT_WHAT_INSN;
+
+  sched_create_recovery_edges (first_bb, recovery_block, second_bb);
+
+  if (current_loops != NULL)
+    add_bb_to_loop (recovery_block, first_bb->loop_father);
+
+  sel_add_or_remove_bb (recovery_block, 1);
+
+  /* Now the CFG has been updated, and we can init data for the newly 
+     created insns.  */
+  insn_init.todo = (INSN_INIT_TODO_LUID | INSN_INIT_TODO_SIMPLEJUMP);
+  sel_init_new_insns ();
+
+  return recovery_block;
+}
+
 /* Merge basic block B into basic block A.  */
 void
 sel_merge_blocks (basic_block a, basic_block b)
@@ -3700,16 +4165,18 @@ sel_merge_blocks (basic_block a, basic_b
 /* A wrapper for redirect_edge_and_branch_force, which also initializes
    data structures for possibly created bb and insns.  Returns the newly
    added bb or NULL, when a bb was not needed.  */
-basic_block
-sel_redirect_edge_force (edge e, basic_block to)
+void
+sel_redirect_edge_and_branch_force (edge e, basic_block to)
 {
   basic_block jump_bb;
 
   gcc_assert (!sel_bb_empty_p (e->src));
 
+  insn_init.what = INSN_INIT_WHAT_INSN;
+
   jump_bb = redirect_edge_and_branch_force (e, to);
 
-  if (jump_bb)
+  if (jump_bb != NULL)
     sel_add_or_remove_bb (jump_bb, 1);
 
   /* This function could not be used to spoil the loop structure by now,
@@ -3721,12 +4188,10 @@ sel_redirect_edge_force (edge e, basic_b
      created insns.  */
   insn_init.todo = (INSN_INIT_TODO_LUID | INSN_INIT_TODO_SIMPLEJUMP);
   sel_init_new_insns ();
-
-  return jump_bb;
 }
 
 /* A wrapper for redirect_edge_and_branch.  */
-edge
+void
 sel_redirect_edge_and_branch (edge e, basic_block to)
 {
   edge ee;
@@ -3736,6 +4201,8 @@ sel_redirect_edge_and_branch (edge e, ba
                   && current_loop_nest
                   && e == loop_latch_edge (current_loop_nest));
 
+  insn_init.what = INSN_INIT_WHAT_INSN;
+
   ee = redirect_edge_and_branch (e, to);
 
   /* When we've redirected a latch edge, update the header.  */
@@ -3751,15 +4218,45 @@ sel_redirect_edge_and_branch (edge e, ba
      created insns.  */
   insn_init.todo = (INSN_INIT_TODO_LUID | INSN_INIT_TODO_SIMPLEJUMP);
   sel_init_new_insns ();
+}
+
+static struct cfg_hooks sel_cfg_hooks;
+
+/* Register sel-sched cfg hooks.  */
+void
+sel_register_cfg_hooks (void)
+{
+  orig_cfg_hooks = get_cfg_hooks ();
+  sel_cfg_hooks = orig_cfg_hooks;
+
+  sel_cfg_hooks.create_basic_block = sel_create_basic_block;
+  sel_cfg_hooks.delete_basic_block = rtl_delete_block_not_barriers;
+
+  set_cfg_hooks (sel_cfg_hooks);
 
-  return ee;
+  sched_init_only_bb = sel_init_only_bb;
+  sched_split_block = sel_split_block;
+  sched_create_empty_bb = sel_create_empty_bb;
+}
+
+/* Unregister sel-sched cfg hooks.  */
+void
+sel_unregister_cfg_hooks (void)
+{
+  sched_create_empty_bb = NULL;
+  sched_split_block = NULL;
+  sched_init_only_bb = NULL;
+
+  set_cfg_hooks (orig_cfg_hooks);
 }
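
sel_register_cfg_hooks () / sel_unregister_cfg_hooks () follow the usual
save-override-restore pattern: copy the current hook table aside, replace a
few entries, and put the saved table back verbatim on finish; the sched_*
function pointers are set and cleared the same way.  A self-contained
sketch of the pattern with made-up hook names:

#include <stdio.h>

struct hooks
{
  void (*create_block) (void);
  void (*delete_block) (void);
};

static void generic_create (void) { puts ("generic create"); }
static void generic_delete (void) { puts ("generic delete"); }
static void sel_create (void)     { puts ("sel create"); }

/* The "current" hook table, as get_cfg_hooks ()/set_cfg_hooks () would
   expose it.  */
static struct hooks current_hooks = { generic_create, generic_delete };

/* Saved across the pass, like orig_cfg_hooks.  */
static struct hooks saved_hooks;

static void
register_hooks (void)
{
  saved_hooks = current_hooks;              /* remember what to restore */
  current_hooks.create_block = sel_create;  /* override selectively */
}

static void
unregister_hooks (void)
{
  current_hooks = saved_hooks;              /* undo all overrides at once */
}

int
main (void)
{
  register_hooks ();
  current_hooks.create_block ();   /* sel create */
  current_hooks.delete_block ();   /* generic delete, left untouched */
  unregister_hooks ();
  current_hooks.create_block ();   /* generic create again */
  return 0;
}
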
 
 
 
 /* Emit an insn rtx based on PATTERN.  */
 static rtx
-create_insn_rtx_from_pattern_1 (rtx pattern)
+create_insn_rtx_from_pattern_1 (rtx pattern, rtx label)
 {
   rtx insn_rtx;
 
@@ -3767,7 +4264,16 @@ create_insn_rtx_from_pattern_1 (rtx patt
 
   start_sequence ();
   insn_init.what = INSN_INIT_WHAT_INSN_RTX;
-  insn_rtx = emit_insn (pattern);
+
+  if (label == NULL_RTX)
+    insn_rtx = emit_insn (pattern);
+  else
+    {
+      insn_rtx = emit_jump_insn (pattern);
+      JUMP_LABEL (insn_rtx) = label;
+      ++LABEL_NUSES (label);
+    }
+
   end_sequence ();
 
   sched_init_luids (NULL, NULL, NULL, NULL);
@@ -3779,9 +4285,9 @@ create_insn_rtx_from_pattern_1 (rtx patt
 /* Emit an insn rtx based on PATTERN and ICE if the result is not a valid
    insn.  */
 rtx
-create_insn_rtx_from_pattern (rtx pattern)
+create_insn_rtx_from_pattern (rtx pattern, rtx label)
 {
-  rtx insn_rtx = create_insn_rtx_from_pattern_1 (pattern);
+  rtx insn_rtx = create_insn_rtx_from_pattern_1 (pattern, label);
 
   if (!insn_rtx_valid (insn_rtx))
     gcc_unreachable ();
@@ -3805,11 +4311,12 @@ create_copy_of_insn_rtx (rtx insn_rtx)
   bool orig_is_valid_p;
   rtx res;
 
-  gcc_assert (INSN_P (insn_rtx));
+  gcc_assert (NONJUMP_INSN_P (insn_rtx));
 
   orig_is_valid_p = insn_rtx_valid (insn_rtx);
 
-  res = create_insn_rtx_from_pattern_1 (copy_rtx (PATTERN (insn_rtx)));
+  res = create_insn_rtx_from_pattern_1 (copy_rtx (PATTERN (insn_rtx)),
+					NULL_RTX);
 
   if (insn_rtx_valid (res))
     gcc_assert (orig_is_valid_p);
@@ -3855,19 +4362,19 @@ static struct haifa_sched_info sched_sel
 void 
 setup_nop_and_exit_insns (void)
 {
-  if (nop_pattern == NULL_RTX)
-    nop_pattern = gen_nop ();
+  gcc_assert (nop_pattern == NULL_RTX
+	      && exit_insn == NULL_RTX);
 
-  if (exit_insn == NULL_RTX)
-    {
-      start_sequence ();
-      insn_init.what = INSN_INIT_WHAT_INSN_RTX;
-      emit_insn (nop_pattern);
-      exit_insn = get_insns ();
-      end_sequence ();
-    }
+  nop_pattern = gen_nop ();
 
-  set_block_for_insn (exit_insn, EXIT_BLOCK_PTR);
+  {
+    start_sequence ();
+    insn_init.what = INSN_INIT_WHAT_INSN_RTX;
+    emit_insn (nop_pattern);
+    exit_insn = get_insns ();
+    end_sequence ();
+    set_block_for_insn (exit_insn, EXIT_BLOCK_PTR);
+  }
 }
 
 /* Free special insns used in the scheduler.  */
@@ -3880,19 +4387,19 @@ free_nop_and_exit_insns (void)
 
 /* Setup a special vinsn used in new insns initialization.  */
 void
-setup_empty_vinsn (void)
+setup_nop_vinsn (void)
 {
-  empty_vinsn = vinsn_create (exit_insn, false);
-  vinsn_attach (empty_vinsn);
+  nop_vinsn = vinsn_create (exit_insn, false);
+  vinsn_attach (nop_vinsn);
 }
 
 /* Free a special vinsn used in new insns initialization.  */
 void
-free_empty_vinsn (void)
+free_nop_vinsn (void)
 {
-  gcc_assert (VINSN_COUNT (empty_vinsn) == 1);
-  vinsn_detach (empty_vinsn);
-  empty_vinsn = NULL;
+  gcc_assert (VINSN_COUNT (nop_vinsn) == 1);
+  vinsn_detach (nop_vinsn);
+  nop_vinsn = NULL;
 }
 
 /* Data structure to describe interaction with the generic scheduler utils.  */
@@ -4305,11 +4812,12 @@ sel_add_loop_preheader (void)
   VEC(basic_block, heap) *preheader_blocks 
     = LOOP_PREHEADER_BLOCKS (current_loop_nest);
 
-  for (i = 0; VEC_iterate (basic_block, 
-			   LOOP_PREHEADER_BLOCKS (current_loop_nest), i, bb); i++)
+  for (i = 0;
+       VEC_iterate (basic_block,
+		    LOOP_PREHEADER_BLOCKS (current_loop_nest), i, bb);
+       i++)
     {
-      
-      sel_add_or_remove_bb_1 (bb, true);
+      sel_add_or_remove_bb_1 (bb, 1);
       
       /* Set variables for the current region.  */
       rgn_setup_region (rgn);
@@ -4330,10 +4838,25 @@ sel_is_loop_preheader_p (basic_block bb)
       && current_loop_nest)
     {
       struct loop *outer;
-      /* BB is placed before the header, so, it is a preheader block.  */
+
+#if 0
+      /* BB is placed before the header, so it is a preheader block.
+	 ??? CURRENT_LOOP_NEST->HEADER does not necessarily belong to the
+	 region, and hence BLOCK_TO_BB for it may be undefined.  */
       if (BLOCK_TO_BB (bb->index) 
-          < BLOCK_TO_BB (current_loop_nest->header->index))
-        return true;
+	  < BLOCK_TO_BB (current_loop_nest->header->index))
+	return true;
+#endif
+
+      /* Preheader is the first block in the region.  */
+      if (BLOCK_TO_BB (bb->index) == 0)
+	return true;
+
+      if (in_current_region_p (current_loop_nest->header))
+	/* Check that we don't miss any of the legitimate cases handled by
+	   the above '#if 0'-ed code.  */
+	gcc_assert (!(BLOCK_TO_BB (bb->index) 
+		      < BLOCK_TO_BB (current_loop_nest->header->index)));
 
       /* Support the situation when the latch block of outer loop
          could be from here.  */
@@ -4389,4 +4912,11 @@ sel_remove_loop_preheader (void)
     SET_LOOP_PREHEADER_BLOCKS (current_loop_nest->outer, preheader_blocks);
 }
 
+/* Return s_i_d entry of INSN.  Callable from debugger.  */
+sel_insn_data_def
+insn_sid (insn_t insn)
+{
+  return *SID (insn);
+}
+
 #endif
--- gcc-local/sel-sched-dev/gcc/sel-sched-ir.h	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sel-sched-ir.h	(revision 28697)
@@ -77,9 +77,6 @@ typedef _xlist_t ilist_t;
 #define ILIST_INSN(L) (_XLIST_X (L))
 #define ILIST_NEXT(L) (_XLIST_NEXT (L))
 
-/* Expression macros -- to be removed.  */
-#define expr_equal_p(A, B) (rtx_equal_p (A, B))
-
 /* Right hand side information.  */
 struct _expr
 {
@@ -454,7 +451,6 @@ _list_iter_cond_def (def_list_t def_list
 }
 
 
-
 /* InstructionData.  Contains information about insn pattern.  */
 struct idata_def
 {
@@ -561,8 +557,8 @@ struct _sel_insn_data
      field of this home VI.  */
   expr_def _expr;
 
-  int av_level;
-  av_set_t av;
+  /* If (WS_LEVEL == GLOBAL_LEVEL) then AV is empty.  */
+  int ws_level;
 
   int seqno;
 
@@ -592,6 +588,9 @@ struct _sel_insn_data
      required.
      This is used when emulating the Haifa scheduler for bundling.  */
   BOOL_BITFIELD after_stall_p : 1;
+
+  /* Speculations that are being checked by this insn.  */
+  ds_t spec_checked_ds;
 };
 
 typedef struct _sel_insn_data sel_insn_data_def;
@@ -604,6 +603,8 @@ extern VEC (sel_insn_data_def, heap) *s_
 /* Accessor macros for s_i_d.  */
 #define SID(INSN) (VEC_index (sel_insn_data_def, s_i_d,	INSN_LUID (INSN)))
 
+extern sel_insn_data_def insn_sid (insn_t);
+
 #define INSN_ASM_P(INSN) (SID (INSN)->asm_p)
 #define INSN_SCHED_NEXT(INSN) (SID (INSN)->sched_next)
 #define INSN_ANALYZED_DEPS(INSN) (SID (INSN)->analyzed_deps)
@@ -618,24 +619,23 @@ extern VEC (sel_insn_data_def, heap) *s_
 #define INSN_REG_SETS(INSN) (VINSN_REG_SETS (INSN_VINSN (INSN)))
 #define INSN_REG_USES(INSN) (VINSN_REG_USES (INSN_VINSN (INSN)))
 #define INSN_SCHED_TIMES(INSN) (EXPR_SCHED_TIMES (INSN_EXPR (INSN)))
-/* Obsolete.  */
-#define INSN_VI(INSN) (INSN_VINSN (INSN))
-
-#define INSN_AV_LEVEL(INSN) (SID (INSN)->av_level)
-#define INSN_AV_VALID_P(INSN) (INSN_AV_LEVEL (INSN) == global_level)
-/* Obsolete.  */
-#define AV_LEVEL(INSN) (INSN_AV_LEVEL (INSN))
-
-#define INSN_AV(INSN) (SID (INSN)->av)
-/* Obsolete.  */
-#define AV_SET(INSN) (INSN_AV (INSN))
-
 #define INSN_SEQNO(INSN) (SID (INSN)->seqno)
 #define INSN_AFTER_STALL_P(INSN) (SID (INSN)->after_stall_p)
 #define INSN_SCHED_CYCLE(INSN) (SID (INSN)->sched_cycle)
+#define INSN_SPEC_CHECKED_DS(INSN) (SID (INSN)->spec_checked_ds)
 
 /* A global level shows whether an insn is valid or not.  */
 extern int global_level;
+
+#define INSN_WS_LEVEL(INSN) (SID (INSN)->ws_level)
+
+extern av_set_t get_av_set (insn_t);
+extern int get_av_level (insn_t);
+
+#define AV_SET(INSN) (get_av_set (INSN))
+#define AV_LEVEL(INSN) (get_av_level (INSN))
+#define AV_SET_VALID_P(INSN) (AV_LEVEL (INSN) == global_level)
+
 /* A list of fences currently in the works.  */
 extern flist_t fences;
 
@@ -644,12 +644,6 @@ extern flist_t fences;
    LUID.  Except for these.  .  */
 struct _sel_insn_rtx_data
 {
-  /* For each bb header this field contains a set of live registers.
-     For all other insns this field has a NULL.
-     We also need to know LV sets for the instructions, that are immediatly
-     after the border of the region.  */
-  regset lv;
-
   /* Vinsn corresponding to this insn.
      We need this field to be accessible for every instruction - not only
      for those that have luids - because when choosing an instruction from
@@ -669,15 +663,14 @@ extern VEC (sel_insn_rtx_data_def, heap)
 #define SIRD(INSN) \
 (VEC_index (sel_insn_rtx_data_def, s_i_r_d, INSN_UID (INSN)))
 
-/* Access macro.  */
-#define LV_SET(INSN) (SIRD (INSN)->lv)
-/* !!! Replace all occurencies with (LV_SET (INSN) != NULL).  */
-#define LV_SET_VALID_P(INSN) (LV_SET (INSN) != NULL)
 #define GET_VINSN_BY_INSN(INSN) (SIRD (INSN)->get_vinsn_by_insn)
 
 extern void sel_extend_insn_rtx_data (void);
 extern void sel_finish_insn_rtx_data (void);
 
+extern void sel_register_rtl_hooks (void);
+extern void sel_unregister_rtl_hooks (void);
+
 /* A NOP pattern used as a placeholder for real insns.  */
 extern rtx nop_pattern;
 
@@ -711,8 +704,6 @@ extern rtx exit_insn;
 /* When false, only notes may be added.  */
 extern bool can_add_real_insns_p;
 
-extern const struct rtl_hooks sel_rtl_hooks;
-extern basic_block (*old_create_basic_block) (void *, void *, basic_block);
 
 
 
@@ -726,13 +717,9 @@ enum insn_init_what { INSN_INIT_WHAT_INS
 /* Initialize s_s_i_d.  */
 #define INSN_INIT_TODO_SSID (2)
 
-/* Initialize LV_SET and SEQNO for simplejump.  */
+/* Initialize data for simplejump.  */
 #define INSN_INIT_TODO_SIMPLEJUMP (4)
 
-/* Move LV_SET to the insn if it is being added to the bb header.  */
-#define INSN_INIT_TODO_MOVE_LV_SET_IF_BB_HEADER (8)
-
-
 /* A container to hold information about insn initialization.  */
 struct _insn_init
 {
@@ -757,31 +744,78 @@ enum _deps_where
 typedef enum _deps_where deps_where_t;
 
 
-/* Per basic block data.  */
-struct _sel_bb_info
+/* Per basic block data for the whole CFG.  */
+struct _sel_global_bb_info
+{
+  /* For each bb header this field contains a set of live registers.
+     For all other insns this field is NULL.
+     We also need to know LV sets for the instructions that are immediately
+     after the border of the region.  */
+  regset lv_set;
+
+  /* Status of LV_SET.
+     true - block has usable LV_SET.
+     false - block's LV_SET should be recomputed.  */
+  bool lv_set_valid_p;
+};
+
+typedef struct _sel_global_bb_info sel_global_bb_info_def;
+typedef sel_global_bb_info_def *sel_global_bb_info_t;
+
+DEF_VEC_O (sel_global_bb_info_def);
+DEF_VEC_ALLOC_O (sel_global_bb_info_def, heap);
+
+/* Per basic block data.  This array is indexed by basic block index.  */
+extern VEC (sel_global_bb_info_def, heap) *sel_global_bb_info;
+
+extern void sel_extend_global_bb_info (void);
+extern void sel_finish_global_bb_info (void);
+
+/* Get data for BB.  */
+#define SEL_GLOBAL_BB_INFO(BB)					\
+  (VEC_index (sel_global_bb_info_def, sel_global_bb_info, (BB)->index))
+
+/* Access macros.  */
+#define BB_LV_SET(BB) (SEL_GLOBAL_BB_INFO (BB)->lv_set)
+#define BB_LV_SET_VALID_P(BB) (SEL_GLOBAL_BB_INFO (BB)->lv_set_valid_p)
+
+/* Per basic block data for the region.  */
+struct _sel_region_bb_info
 {
   /* This insn stream is constructed in such a way that it should be
      traversed by PREV_INSN field - (*not* NEXT_INSN).  */
   rtx note_list;
+
+  /* Cached availability set at the beginning of a block.
+     See also AV_LEVEL () for conditions when this av_set can be used.  */
+  av_set_t av_set;
+
+  /* If (AV_LEVEL == GLOBAL_LEVEL) then AV is valid.  */
+  int av_level;
 };
 
-typedef struct _sel_bb_info sel_bb_info_def;
-typedef sel_bb_info_def *sel_bb_info_t;
+typedef struct _sel_region_bb_info sel_region_bb_info_def;
+typedef sel_region_bb_info_def *sel_region_bb_info_t;
 
-DEF_VEC_O (sel_bb_info_def);
-DEF_VEC_ALLOC_O (sel_bb_info_def, heap);
+DEF_VEC_O (sel_region_bb_info_def);
+DEF_VEC_ALLOC_O (sel_region_bb_info_def, heap);
 
 /* Per basic block data.  This array is indexed by basic block index.  */
-extern VEC (sel_bb_info_def, heap) *sel_bb_info;
+extern VEC (sel_region_bb_info_def, heap) *sel_region_bb_info;
 
 /* Get data for BB.  */
-#define SEL_BB_INFO(BB) (VEC_index (sel_bb_info_def, sel_bb_info, (BB)->index))
+#define SEL_REGION_BB_INFO(BB) (VEC_index (sel_region_bb_info_def,	\
+					   sel_region_bb_info, (BB)->index))
 
 /* Get BB's note_list.
    A note_list is a list of various notes that was scattered across BB
    before scheduling, and will be appended at the beginning of BB after
    scheduling is finished.  */
-#define BB_NOTE_LIST(BB) (SEL_BB_INFO (BB)->note_list)
+#define BB_NOTE_LIST(BB) (SEL_REGION_BB_INFO (BB)->note_list)
+
+#define BB_AV_SET(BB) (SEL_REGION_BB_INFO (BB)->av_set)
+#define BB_AV_LEVEL(BB) (SEL_REGION_BB_INFO (BB)->av_level)
+#define BB_AV_SET_VALID_P(BB) (BB_AV_LEVEL (BB) == global_level)
 
 /* Used in bb_in_ebb_p.  */
 extern bitmap_head *forced_ebb_heads;
@@ -823,7 +857,7 @@ extern void reset_target_context (tc_t, 
 extern void advance_deps_context (deps_t, insn_t);
 
 /* Fences functions.  */
-extern void init_fences (basic_block);
+extern void init_fences (insn_t);
 extern void new_fences_add (flist_tail_t, insn_t, state_t, deps_t, void *, rtx,
                        rtx, int, int, bool, bool);
 extern void new_fences_add_clean (flist_tail_t, insn_t, fence_t);
@@ -845,6 +879,7 @@ extern bool vinsn_cond_branch_p (vinsn_t
 extern void recompute_vinsn_lhs_rhs (vinsn_t);
 extern int sel_vinsn_cost (vinsn_t);
 extern insn_t sel_gen_insn_from_rtx_after (rtx, expr_t, int, insn_t);
+extern insn_t sel_gen_recovery_insn_from_rtx_after (rtx, expr_t, int, insn_t);
 extern insn_t sel_gen_insn_from_expr_after (expr_t, int, insn_t);
 
 /* RHS functions.  */
@@ -892,21 +927,20 @@ extern void sel_finish_new_insns (void);
 extern bool bookkeeping_can_be_created_if_moved_through_p (insn_t);
 extern insn_t copy_insn_out_of_stream (vinsn_t);
 extern insn_t copy_insn_and_insert_before (insn_t, insn_t);
-extern void sched_sel_remove_insn (insn_t);
-extern void transfer_data_sets (insn_t, insn_t);
+extern void sel_remove_insn (insn_t);
 extern int vinsn_dfa_cost (vinsn_t, fence_t);
 extern bool bb_header_p (insn_t);
-
+extern void sel_init_invalid_data_sets (insn_t);
 
 /* Basic block and CFG functions.  */
 
-extern insn_t sel_bb_header (basic_block);
-extern bool sel_bb_header_p (insn_t);
-extern bool sel_bb_empty_p_1 (basic_block, bool);
-extern bool sel_bb_empty_p (basic_block);
+extern insn_t sel_bb_head (basic_block);
+extern bool sel_bb_head_p (insn_t);
 extern insn_t sel_bb_end (basic_block);
 extern bool sel_bb_end_p (insn_t);
 
+extern bool sel_bb_empty_p (basic_block);
+
 extern bool in_current_region_p (basic_block);
 
 extern void sel_init_bbs (bb_vec_t, basic_block);
@@ -918,23 +952,22 @@ extern void cfg_succs_1 (insn_t, int, in
 extern void cfg_succs (insn_t, insn_t **, int *);
 extern insn_t cfg_succ_1 (insn_t, int);
 extern insn_t cfg_succ (insn_t);
-extern bool num_preds_gt_1 (insn_t);
+extern bool sel_num_cfg_preds_gt_1 (insn_t);
 
 extern bool is_ineligible_successor (insn_t, ilist_t);
 
 extern bool bb_ends_ebb_p (basic_block);
 extern bool in_same_ebb_p (insn_t, insn_t);
 
-extern basic_block sel_create_basic_block (void *, void *, basic_block);
+extern void free_bb_note_pool (void);
 
-extern void sel_add_or_remove_bb (basic_block, int);
 extern basic_block sel_create_basic_block_before (basic_block);
 extern void sel_remove_empty_bb (basic_block, bool, bool);
-extern basic_block sel_split_block (basic_block, insn_t);
 extern basic_block sel_split_edge (edge);
+extern basic_block sel_create_recovery_block (insn_t);
 extern void sel_merge_blocks (basic_block, basic_block);
-extern basic_block sel_redirect_edge_force (edge, basic_block);
-extern edge sel_redirect_edge_and_branch (edge, basic_block);
+extern void sel_redirect_edge_and_branch (edge, basic_block);
+extern void sel_redirect_edge_and_branch_force (edge, basic_block);
 extern void pipeline_outer_loops (void);
 extern void pipeline_outer_loops_init (void);
 extern void pipeline_outer_loops_finish (void);
@@ -947,8 +980,11 @@ extern void sel_add_loop_preheader (void
 extern bool sel_is_loop_preheader_p (basic_block);
 extern void clear_outdated_rtx_info (basic_block);
 
+extern void sel_register_cfg_hooks (void);
+extern void sel_unregister_cfg_hooks (void);
+
 /* Expression transformation routines.  */
-extern rtx create_insn_rtx_from_pattern (rtx);
+extern rtx create_insn_rtx_from_pattern (rtx, rtx);
 extern vinsn_t create_vinsn_from_insn_rtx (rtx);
 extern rtx create_copy_of_insn_rtx (rtx);
 extern void change_vinsn_in_expr (expr_t, vinsn_t);
@@ -958,8 +994,8 @@ extern void init_lv_sets (void);
 extern void free_lv_sets (void);
 extern void setup_nop_and_exit_insns (void);
 extern void free_nop_and_exit_insns (void);
-extern void setup_empty_vinsn (void);
-extern void free_empty_vinsn (void);
+extern void setup_nop_vinsn (void);
+extern void free_nop_vinsn (void);
 extern void sel_setup_common_sched_info (void);
 extern void sel_setup_sched_infos (void);
 
@@ -1059,10 +1095,23 @@ get_all_loop_exits (basic_block bb)
   /* And now check whether we should skip over inner loop.  */
   if (inner_loop_header_p (bb))
     {
-      struct loop *this_loop = bb->loop_father;
+      struct loop *this_loop;
       int i;
       edge e;
 
+      {
+	struct loop *pred_loop = NULL;
+
+	for (this_loop = bb->loop_father;
+	     this_loop && this_loop != current_loop_nest;
+	     this_loop = this_loop->outer)
+	  pred_loop = this_loop;
+
+	this_loop = pred_loop;
+
+	gcc_assert (this_loop != NULL);
+      }
+
       exits = get_loop_exit_edges_unique_dests (this_loop);
 
       /* Traverse all loop headers.  */
@@ -1089,6 +1138,10 @@ get_all_loop_exits (basic_block bb)
 		continue;
 	      }
 	  }
+	else
+	  {
+	    gcc_assert (!inner_loop_header_p (e->dest));
+	  }
     }
 
   return exits;
@@ -1222,8 +1275,8 @@ _succ_iter_cond (succ_iterator *ip, rtx 
               ei_next (&(ip->ei));
             }
 
-          /* If loop_exits are non null, we have found an inner loop; do one more iteration 
-             to fetch an edge from these exits.  */
+          /* If loop_exits are non-null, we have found an inner loop;
+	     do one more iteration to fetch an edge from these exits.  */
           if (ip->loop_exits)
             continue;
 
@@ -1239,10 +1292,12 @@ _succ_iter_cond (succ_iterator *ip, rtx 
 	    *succp = exit_insn;
 	  else
 	    {
-              *succp = next_nonnote_insn (bb_note (bb));
-              
+              *succp = sel_bb_head (bb);
+
               gcc_assert (ip->flags != SUCCS_NORMAL
                           || *succp == NEXT_INSN (bb_note (bb)));
+
+	      gcc_assert (BLOCK_FOR_INSN (*succp) == bb);
 	    }
 
 	  return true;
@@ -1289,7 +1344,7 @@ _eligible_successor_edge_p (edge e1, bas
   /* Skip empty blocks, but be careful not to leave the region.  */
   while (1)
     {
-      if (!sel_bb_empty_p_1 (bb, false))
+      if (!sel_bb_empty_p (bb))
         break;
         
       if (!in_current_region_p (bb) 
@@ -1308,8 +1363,8 @@ _eligible_successor_edge_p (edge e1, bas
       /* BLOCK_TO_BB sets topological order of the region here.  
          It is important to use REAL_PRED here as we may well have 
          e1->src outside current region, when skipping to loop exits.  */
-      bool succeeds_in_top_order 
-        = BLOCK_TO_BB (real_pred->index) < BLOCK_TO_BB (bb->index);
+      bool succeeds_in_top_order = (BLOCK_TO_BB (real_pred->index)
+				    < BLOCK_TO_BB (bb->index));
 
       /* We are advancing forward in the region, as usual.  */
       if (succeeds_in_top_order)
--- gcc-local/sel-sched-dev/gcc/sel-sched-dump.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sel-sched-dump.c	(revision 28697)
@@ -312,7 +312,8 @@ dump_vinsn_1 (vinsn_t vi, int flags)
   line_finish ();
 }
 
-static int dump_vinsn_flags = DUMP_VINSN_INSN_RTX | DUMP_VINSN_TYPE;
+static int dump_vinsn_flags = (DUMP_VINSN_INSN_RTX | DUMP_VINSN_TYPE
+			       | DUMP_VINSN_COUNT);
 
 void
 dump_vinsn (vinsn_t vi)
@@ -763,9 +764,11 @@ sel_dump_cfg_insn (insn_t insn, int flag
 {
   int insn_flags = DUMP_INSN_UID | DUMP_INSN_PATTERN;
 
-  if ((flags & SEL_DUMP_CFG_INSN_SEQNO)
-      && INSN_LUID (insn) > 0)
-    insn_flags |= DUMP_INSN_SEQNO | DUMP_INSN_SCHED_CYCLE | DUMP_INSN_EXPR;
+  if (sched_luids != NULL && INSN_LUID (insn) > 0)
+    {
+      if (flags & SEL_DUMP_CFG_INSN_SEQNO)
+	insn_flags |= DUMP_INSN_SEQNO | DUMP_INSN_SCHED_CYCLE | DUMP_INSN_EXPR;
+    }
 
   dump_insn_1 (insn, insn_flags);
 }
@@ -905,6 +908,10 @@ sel_dump_cfg_2 (FILE *f, int flags)
       fprintf (f, "\tbb%d [%s%slabel = \"{Basic block %d", bb->index,
 	       style, color, bb->index);
 
+      if ((flags & SEL_DUMP_CFG_BB_LOOP)
+	  && bb->loop_father != NULL)
+	fprintf (f, ", loop %d", bb->loop_father->num);
+
       if (full_p
 	  && (flags & SEL_DUMP_CFG_BB_NOTES_LIST))
 	{
@@ -931,33 +938,23 @@ sel_dump_cfg_2 (FILE *f, int flags)
 	  && in_current_region_p (bb)
 	  && !sel_bb_empty_p (bb))
 	{
-	  insn_t head = NEXT_INSN (bb_note (bb));
-
 	  fprintf (f, "|");
 
-	  if (INSN_AV_VALID_P (head))
-	    dump_av_set (AV_SET (head));
-	  else
-	    {
-	      fprintf (f, "!!! Wrong AV_SET%s",
-		       (AV_LEVEL (head) == -1) ? ": but ok" : "");
-	    }
+	  if (BB_AV_SET_VALID_P (bb))
+	    dump_av_set (BB_AV_SET (bb));
+	  else if (BB_AV_LEVEL (bb) == -1)
+	    fprintf (f, "AV_SET needs update");
 	}
 
       if ((flags & SEL_DUMP_CFG_LV_SET)
 	  && !sel_bb_empty_p (bb))
-	{
-	  insn_t head;
-	  insn_t tail;
-
-	  get_ebb_head_tail (bb, bb, &head, &tail);
-
+ 	{
 	  fprintf (f, "|");
 
-	  if (INSN_P (head) && LV_SET_VALID_P (head))
-	    dump_lv_set (LV_SET (head));
+	  if (BB_LV_SET_VALID_P (bb))
+	    dump_lv_set (BB_LV_SET (bb));
 	  else
-	    fprintf (f, "!!! Wrong LV_SET");
+	    fprintf (f, "LV_SET needs update");
 	}
 
       if (flags & SEL_DUMP_CFG_BB_LIVE)
--- gcc-local/sel-sched-dev/gcc/sel-sched-dump.h	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sel-sched-dump.h	(revision 28697)
@@ -37,6 +37,7 @@ Software Foundation, 51 Franklin Street,
 #define SEL_DUMP_CFG_INSN_FLAGS (0)
 #define SEL_DUMP_CFG_FUNCTION_NAME (256)
 #define SEL_DUMP_CFG_BB_LIVE (512)
+#define SEL_DUMP_CFG_BB_LOOP (1024)
 /* The default flags for cfg dumping.  */
 #define SEL_DUMP_CFG_FLAGS (SEL_DUMP_CFG_CURRENT_REGION \
 			    | SEL_DUMP_CFG_BB_NOTES_LIST \
@@ -45,7 +46,8 @@ Software Foundation, 51 Franklin Street,
 			    | SEL_DUMP_CFG_BB_INSNS \
                             | SEL_DUMP_CFG_FENCES \
                             | SEL_DUMP_CFG_INSN_SEQNO \
-                            | SEL_DUMP_CFG_INSN_FLAGS)
+                            | SEL_DUMP_CFG_INSN_FLAGS \
+			    | SEL_DUMP_CFG_BB_LOOP)
 
 enum _dump_insn_rtx
   {
--- gcc-local/sel-sched-dev/gcc/sched-deps.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sched-deps.c	(revision 28697)
@@ -2594,6 +2594,12 @@ ds_max_merge (ds_t ds1, ds_t ds2)
   if (ds1 == 0 && ds2 == 0)
     return 0;
 
+  if (ds1 == 0 && ds2 != 0)
+    return ds2;
+
+  if (ds1 != 0 && ds2 == 0)
+    return ds1;
+
   return ds_merge_1 (ds1, ds2, true);
 }
 
--- gcc-local/sel-sched-dev/gcc/sched-int.h	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sched-int.h	(revision 28697)
@@ -77,8 +77,6 @@ extern void sched_extend_target (void);
 extern void haifa_init_h_i_d (bb_vec_t, basic_block, insn_vec_t, rtx);
 extern void haifa_finish_h_i_d (void);
 
-extern void haifa_init_only_bb (basic_block, basic_block);
-
 /* Hooks that are common to all the schedulers.  */
 struct common_sched_info_def
 {
@@ -156,6 +154,8 @@ extern VEC (int, heap) *sched_luids;
 /* The highest INSN_LUID.  */
 extern int sched_max_luid;
 
+extern int insn_luid (rtx);
+
 /* Return true if NOTE is a note but not a basic block one.  */
 #define NOTE_NOT_BB_P(NOTE) (NOTE_P (NOTE) && (NOTE_LINE_NUMBER (NOTE)	\
 					       != NOTE_INSN_BASIC_BLOCK))
@@ -293,6 +293,17 @@ extern int max_issue (struct ready_list 
 extern void ebb_compute_jump_reg_dependencies (rtx, regset, regset, regset);
 
 extern edge find_fallthru_edge (basic_block);
+
+extern void (* sched_init_only_bb) (basic_block, basic_block);
+extern basic_block (* sched_split_block) (basic_block, rtx);
+extern basic_block sched_split_block_1 (basic_block, rtx);
+extern basic_block (* sched_create_empty_bb) (basic_block);
+extern basic_block sched_create_empty_bb_1 (basic_block);
+
+extern basic_block sched_create_recovery_block (void);
+extern void sched_create_recovery_edges (basic_block, basic_block,
+					 basic_block);
+
 extern void dump_insn_slim_1 (FILE *, rtx);
 
 /* Pointer to data describing the current DFA state.  */
--- gcc-local/sel-sched-dev/gcc/sched-rgn.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sched-rgn.c	(revision 28697)
@@ -3379,43 +3379,50 @@ extend_regions (void)
   containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
 }
 
-/* BB was added to ebb after AFTER.  */
-static void
-rgn_add_block (basic_block bb, basic_block after)
+void
+rgn_make_new_region_out_of_new_block (basic_block bb)
 {
-  extend_regions ();
-
-  if (after == 0 || after == EXIT_BLOCK_PTR)
-    {
-      int i;
+  int i;
       
-      i = RGN_BLOCKS (nr_regions);
-      /* I - first free position in rgn_bb_table.  */
+  i = RGN_BLOCKS (nr_regions);
+  /* I - first free position in rgn_bb_table.  */
 
-      rgn_bb_table[i] = bb->index;
-      RGN_NR_BLOCKS (nr_regions) = 1;
-      RGN_DONT_CALC_DEPS (nr_regions) = after == EXIT_BLOCK_PTR;
-      RGN_HAS_REAL_EBB (nr_regions) = 0;
-      RGN_HAS_RENAMING_P (nr_regions) = 0;
-      RGN_WAS_PIPELINED_P (nr_regions) = 0;
-      RGN_NEEDS_GLOBAL_LIVE_UPDATE (nr_regions) = 0;
-      CONTAINING_RGN (bb->index) = nr_regions;
-      BLOCK_TO_BB (bb->index) = 0;
+  rgn_bb_table[i] = bb->index;
+  RGN_NR_BLOCKS (nr_regions) = 1;
+  RGN_HAS_REAL_EBB (nr_regions) = 0;
+  RGN_DONT_CALC_DEPS (nr_regions) = 0;
+  RGN_HAS_RENAMING_P (nr_regions) = 0;
+  RGN_WAS_PIPELINED_P (nr_regions) = 0;
+  RGN_NEEDS_GLOBAL_LIVE_UPDATE (nr_regions) = 0;
+  CONTAINING_RGN (bb->index) = nr_regions;
+  BLOCK_TO_BB (bb->index) = 0;
 
-      nr_regions++;
+  nr_regions++;
       
-      RGN_BLOCKS (nr_regions) = i + 1;
+  RGN_BLOCKS (nr_regions) = i + 1;
 
-      if (CHECK_DEAD_NOTES)
-        {
-          sbitmap blocks = sbitmap_alloc (last_basic_block);
-          deaths_in_region = xrealloc (deaths_in_region, nr_regions *
-				       sizeof (*deaths_in_region));
+  if (CHECK_DEAD_NOTES)
+    {
+      sbitmap blocks = sbitmap_alloc (last_basic_block);
+      deaths_in_region = xrealloc (deaths_in_region, nr_regions *
+				   sizeof (*deaths_in_region));
 
-          check_dead_notes1 (nr_regions - 1, blocks);
+      check_dead_notes1 (nr_regions - 1, blocks);
       
-          sbitmap_free (blocks);
-        }
+      sbitmap_free (blocks);
+    }
+}
+
+/* BB was added to ebb after AFTER.  */
+static void
+rgn_add_block (basic_block bb, basic_block after)
+{
+  extend_regions ();
+
+  if (after == 0 || after == EXIT_BLOCK_PTR)
+    {
+      rgn_make_new_region_out_of_new_block (bb);
+      RGN_DONT_CALC_DEPS (nr_regions - 1) = (after == EXIT_BLOCK_PTR);
     }
   else
     { 
--- gcc-local/sel-sched-dev/gcc/sched-rgn.h	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/sched-rgn.h	(revision 28697)
@@ -77,6 +77,7 @@ extern void sched_rgn_local_init (int);
 extern void sched_rgn_local_finish (void);
 extern void sched_rgn_local_free (void);
 extern void extend_regions (void);
+extern void rgn_make_new_region_out_of_new_block (basic_block);
 
 extern void compute_trg_info (int);
 extern void free_trg_info (void);
--- gcc-local/sel-sched-dev/gcc/config/ia64/ia64.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/config/ia64/ia64.c	(revision 28697)
@@ -7255,6 +7255,13 @@ ia64_gen_spec_load (rtx insn, ds_t ts, i
   return new_pat;
 }
 
+static bool
+insn_can_be_in_speculative_p (rtx insn ATTRIBUTE_UNUSED,
+			      ds_t ds ATTRIBUTE_UNUSED)
+{
+  return false;
+}
+
 /* Implement targetm.sched.speculate_insn hook.
    Check if the INSN can be TS speculative.
    If 'no' - return -1.
@@ -7267,11 +7274,15 @@ ia64_speculate_insn (rtx insn, ds_t ts, 
   int mode_no;
   int res;
   
-  gcc_assert (!(ts & ~BEGIN_SPEC));
+  gcc_assert (!(ts & ~SPECULATIVE));
 
   if (ia64_spec_check_p (insn))
     return -1;
 
+  if ((ts & BE_IN_SPEC)
+      && !insn_can_be_in_speculative_p (insn, ts))
+    return -1;
+
   mode_no = get_mode_no_for_insn (insn);
 
   if (mode_no != SPEC_MODE_INVALID)
@@ -7437,8 +7448,7 @@ ia64_needs_block_p (ds_t ts)
 
   gcc_assert ((ts & BEGIN_CONTROL) != 0);
 
-  return (!((mflag_sched_spec_control_ldc && mflag_sched_spec_ldc)
-	    || SEL_SCHED_P));
+  return !(mflag_sched_spec_control_ldc && mflag_sched_spec_ldc);
 }
 
 /* Generate (or regenerate, if (MUTATE_P)) recovery check for INSN.
--- gcc-local/sel-sched-dev/gcc/cfgrtl.c	(revision 28696)
+++ gcc-local/sel-sched-dev/gcc/cfgrtl.c	(revision 28697)
@@ -247,6 +247,7 @@ basic_block
 create_basic_block_structure (rtx head, rtx end, rtx bb_note, basic_block after)
 {
   basic_block bb;
+  int bb_index;
 
   if (bb_note
       && (bb = NOTE_BASIC_BLOCK (bb_note)) != NULL
@@ -266,6 +267,10 @@ create_basic_block_structure (rtx head, 
 
       if (after != bb_note && NEXT_INSN (after) != bb_note)
 	reorder_insns_nobb (bb_note, bb_note, after);
+
+      /* ??? It would be better to set bb_index to BLOCK_NUM (bb_note),
+	 but that causes add_to_dominance_info () to fail.  */
+      bb_index = last_basic_block++;
     }
   else
     {
@@ -292,6 +297,8 @@ create_basic_block_structure (rtx head, 
 	}
 
       NOTE_BASIC_BLOCK (bb_note) = bb;
+
+      bb_index = last_basic_block++;
     }
 
   /* Always include the bb note in the block.  */
@@ -300,7 +307,7 @@ create_basic_block_structure (rtx head, 
 
   BB_HEAD (bb) = head;
   BB_END (bb) = end;
-  bb->index = last_basic_block++;
+  bb->index = bb_index;
   bb->flags = BB_NEW | BB_RTL;
   link_block (bb, after);
   SET_BASIC_BLOCK (bb->index, bb);

Property changes on: gcc-local/sel-sched-dev
___________________________________________________________________
Name: svk:merge
  23c3ee16-a423-49b3-8738-b114dc1aabb6:/local/gcc-trunk:531
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/beforeload:26052
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/cache-deps:28683
 +41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/old/sel-lds:27682
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/powerpc-fixes:25348
 +41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/sel-chk:28693
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/sel-lds:27679
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/sel-sched:27173
  41d2e0e8-8285-4a91-821f-3a5385f608dd:/gcc-local/sel-sched-dev-merge-30941:26076

