This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


[PR debug/41535] reset debug insns affected by scheduling


The logic introduced in the scheduler to keep debug insns at about the
right place (as early as possible given dependencies on their operands,
on the prior debug insn and on the prior nondebug insn), while
preventing them from affecting the scheduling of nondebug insns (by
avoiding any dependencies of nondebug insns on debug insns), ended up
allowing nondebug insns that set registers or memory locations to be
moved ahead of debug insns that expected their prior values.  Oops.

This patch enables nondebug insns to "depend" on debug insns, but these
dependencies are handled in a special way: they don't stop the nondebug
insn from being scheduled.  Rather, they enable the nondebug insn, once
scheduled, to quickly reset debug insns that remain as unresolved
dependencies.

To avoid using up more memory and complicating scheduler logic, I didn't
add more dependency lists.  Instead, I've arranged for "debug deps",
i.e., those in which a nondebug insn depends on a debug insn, to be
added to the dependency lists after all nondebug deps.  This enables us
to quickly verify that the dependency lists are "empty" (save for debug
deps).

As expected, this situation is quite uncommon.  There are no more than a
few hundred hits building stage2 of GCC with all languages enabled.
Thus, although attempting to preserve the debug information at hand
might be possible, I decided it wasn't worth it, and coded the scheduler
to just drop it on the floor.

This is the current implementation.  I'm a bit undecided between the
alternatives: implement the insertion of debug deps into dep lists in
this simple-minded way, which may exhibit O(n^2) behavior if long lists
of debug deps arise; compute all deps without regard to such ordering
and then reorder the lists, either at the end (which won't work with
incremental computation of dependencies, as in sel-sched, IIRC) or when
computing list lengths; or perhaps just discount debug deps in the list
length counter.

I'm not sufficiently proficient in the schedulers to tell which is best
without trying them all.  Any advice from sched experts?


I'm still testing this patch, but if it looks reasonable, is it ok to
install if it passes regstrap?

for  gcc/ChangeLog
from  Alexandre Oliva  <aoliva@redhat.com>

	PR debug/41535
	* sched-deps.c (depl_on_debug_p): New.
	(attach_dep_link): Reject debug deps before nondebug deps.
	(add_to_deps_list): Insert debug deps after nondebug deps.
	(sd_lists_empty_p): Stop at first nonempty list.  Disregard debug
	deps.
	(sd_add_dep): Do not reject debug deps.
	(add_insn_mem_dependence): Don't count debug deps.
	(remove_from_deps): Likewise.
	(sched_analyze_2): Set up mem deps on debug insns.
	(sched_analyze_insn): Record reg uses for deps on debug insns.
	* haifa-sched.c (schedule_insn): Reset deferred debug insn.  Don't
	try_ready nondebug insn after debug insn.

Index: gcc/sched-deps.c
===================================================================
--- gcc/sched-deps.c.orig	2009-10-15 03:32:16.000000000 -0300
+++ gcc/sched-deps.c	2009-10-15 03:35:27.000000000 -0300
@@ -211,6 +211,16 @@ sd_debug_dep (dep_t dep)
   fprintf (stderr, "\n");
 }
 
+/* Determine whether DEP is a dependency link of a non-debug insn on a
+   debug insn.  */
+
+static inline bool
+depl_on_debug_p (dep_link_t dep)
+{
+  return (DEBUG_INSN_P (DEP_LINK_PRO (dep))
+	  && !DEBUG_INSN_P (DEP_LINK_CON (dep)));
+}
+
 /* Functions to operate with a single link from the dependencies lists -
    dep_link_t.  */
 
@@ -233,6 +243,8 @@ attach_dep_link (dep_link_t l, dep_link_
     {
       gcc_assert (DEP_LINK_PREV_NEXTP (next) == prev_nextp);
 
+      gcc_assert (!depl_on_debug_p (l) || depl_on_debug_p (next));
+
       DEP_LINK_PREV_NEXTP (next) = &DEP_LINK_NEXT (l);
     }
 
@@ -244,7 +256,14 @@ attach_dep_link (dep_link_t l, dep_link_
 static void
 add_to_deps_list (dep_link_t link, deps_list_t l)
 {
-  attach_dep_link (link, &DEPS_LIST_FIRST (l));
+  dep_link_t *nextp = &DEPS_LIST_FIRST (l);
+
+  /* Keep debug deps after other kinds of deps.  */
+  if (MAY_HAVE_DEBUG_INSNS && depl_on_debug_p (link))
+    while (*nextp && !depl_on_debug_p (*nextp))
+      nextp = &DEP_LINK_NEXT (*nextp);
+
+  attach_dep_link (link, nextp);
 
   ++DEPS_LIST_N_LINKS (l);
 }
@@ -668,10 +687,22 @@ sd_lists_size (const_rtx insn, sd_list_t
 }
 
 /* Return true if INSN's lists defined by LIST_TYPES are all empty.  */
+
 bool
 sd_lists_empty_p (const_rtx insn, sd_list_types_def list_types)
 {
-  return sd_lists_size (insn, list_types) == 0;
+  while (list_types != SD_LIST_NONE)
+    {
+      deps_list_t list;
+      bool resolved_p;
+
+      sd_next_list (insn, &list_types, &list, &resolved_p);
+      if (list && DEPS_LIST_N_LINKS (list)
+	  && !depl_on_debug_p (DEPS_LIST_FIRST (list)))
+	return false;
+    }
+
+  return true;
 }
 
 /* Initialize data for INSN.  */
@@ -1201,7 +1232,6 @@ sd_add_dep (dep_t dep, bool resolved_p)
   rtx insn = DEP_CON (dep);
 
   gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
-  gcc_assert (!DEBUG_INSN_P (elem) || DEBUG_INSN_P (insn));
 
   if ((current_sched_info->flags & DO_SPECULATION)
       && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep)))
@@ -1528,7 +1558,8 @@ add_insn_mem_dependence (struct deps *de
     {
       insn_list = &deps->pending_read_insns;
       mem_list = &deps->pending_read_mems;
-      deps->pending_read_list_length++;
+      if (!DEBUG_INSN_P (insn))
+	deps->pending_read_list_length++;
     }
   else
     {
@@ -2408,63 +2439,63 @@ sched_analyze_2 (struct deps *deps, rtx 
 	rtx pending, pending_mem;
 	rtx t = x;
 
-	if (DEBUG_INSN_P (insn))
-	  {
-	    sched_analyze_2 (deps, XEXP (x, 0), insn);
-	    return;
-	  }
-
 	if (sched_deps_info->use_cselib)
 	  {
 	    t = shallow_copy_rtx (t);
 	    cselib_lookup (XEXP (t, 0), Pmode, 1);
 	    XEXP (t, 0) = cselib_subst_to_values (XEXP (t, 0));
 	  }
-	t = canon_rtx (t);
-	pending = deps->pending_read_insns;
-	pending_mem = deps->pending_read_mems;
-	while (pending)
+
+	if (!DEBUG_INSN_P (insn))
 	  {
-	    if (read_dependence (XEXP (pending_mem, 0), t)
-		&& ! sched_insns_conditions_mutex_p (insn, XEXP (pending, 0)))
-	      note_mem_dep (t, XEXP (pending_mem, 0), XEXP (pending, 0),
-			    DEP_ANTI);
+	    t = canon_rtx (t);
+	    pending = deps->pending_read_insns;
+	    pending_mem = deps->pending_read_mems;
+	    while (pending)
+	      {
+		if (read_dependence (XEXP (pending_mem, 0), t)
+		    && ! sched_insns_conditions_mutex_p (insn,
+							 XEXP (pending, 0)))
+		  note_mem_dep (t, XEXP (pending_mem, 0), XEXP (pending, 0),
+				DEP_ANTI);
 
-	    pending = XEXP (pending, 1);
-	    pending_mem = XEXP (pending_mem, 1);
-	  }
+		pending = XEXP (pending, 1);
+		pending_mem = XEXP (pending_mem, 1);
+	      }
 
-	pending = deps->pending_write_insns;
-	pending_mem = deps->pending_write_mems;
-	while (pending)
-	  {
-	    if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
-				 t, rtx_varies_p)
-		&& ! sched_insns_conditions_mutex_p (insn, XEXP (pending, 0)))
-	      note_mem_dep (t, XEXP (pending_mem, 0), XEXP (pending, 0),
-			    sched_deps_info->generate_spec_deps
-			    ? BEGIN_DATA | DEP_TRUE : DEP_TRUE);
+	    pending = deps->pending_write_insns;
+	    pending_mem = deps->pending_write_mems;
+	    while (pending)
+	      {
+		if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
+				     t, rtx_varies_p)
+		    && ! sched_insns_conditions_mutex_p (insn,
+							 XEXP (pending, 0)))
+		  note_mem_dep (t, XEXP (pending_mem, 0), XEXP (pending, 0),
+				sched_deps_info->generate_spec_deps
+				? BEGIN_DATA | DEP_TRUE : DEP_TRUE);
 
-	    pending = XEXP (pending, 1);
-	    pending_mem = XEXP (pending_mem, 1);
-	  }
+		pending = XEXP (pending, 1);
+		pending_mem = XEXP (pending_mem, 1);
+	      }
 
-	for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
-	  {
-	    if (! JUMP_P (XEXP (u, 0)))
-	      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
-	    else if (deps_may_trap_p (x))
+	    for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
 	      {
-		if ((sched_deps_info->generate_spec_deps)
-		    && sel_sched_p () && (spec_info->mask & BEGIN_CONTROL))
+		if (! JUMP_P (XEXP (u, 0)))
+		  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
+		else if (deps_may_trap_p (x))
 		  {
-		    ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
-					    MAX_DEP_WEAK);
-
-		    note_dep (XEXP (u, 0), ds);
+		    if ((sched_deps_info->generate_spec_deps)
+			&& sel_sched_p () && (spec_info->mask & BEGIN_CONTROL))
+		      {
+			ds_t ds = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
+						MAX_DEP_WEAK);
+
+			note_dep (XEXP (u, 0), ds);
+		      }
+		    else
+		      add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
 		  }
-		else
-		  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
 	      }
 	  }
 
@@ -2473,7 +2504,6 @@ sched_analyze_2 (struct deps *deps, rtx 
         if (!deps->readonly)
           add_insn_mem_dependence (deps, true, insn, x);
 
-	/* Take advantage of tail recursion here.  */
 	sched_analyze_2 (deps, XEXP (x, 0), insn);
 
 	if (cslr_p && sched_deps_info->finish_rhs)
@@ -2773,6 +2803,9 @@ sched_analyze_insn (struct deps *deps, r
 	  struct deps_reg *reg_last = &deps->reg_last[i];
 	  add_dependence_list (insn, reg_last->sets, 1, REG_DEP_ANTI);
 	  add_dependence_list (insn, reg_last->clobbers, 1, REG_DEP_ANTI);
+
+	  if (!deps->readonly)
+	    reg_last->uses = alloc_INSN_LIST (insn, reg_last->uses);
 	}
       CLEAR_REG_SET (reg_pending_uses);
 
@@ -3505,7 +3538,8 @@ remove_from_deps (struct deps *deps, rtx
   
   removed = remove_from_both_dependence_lists (insn, &deps->pending_read_insns,
                                                &deps->pending_read_mems);
-  deps->pending_read_list_length -= removed;
+  if (!DEBUG_INSN_P (insn))
+    deps->pending_read_list_length -= removed;
   removed = remove_from_both_dependence_lists (insn, &deps->pending_write_insns,
                                                &deps->pending_write_mems);
   deps->pending_write_list_length -= removed;
Index: gcc/haifa-sched.c
===================================================================
--- gcc/haifa-sched.c.orig	2009-10-15 03:32:16.000000000 -0300
+++ gcc/haifa-sched.c	2009-10-15 03:48:25.000000000 -0300
@@ -1688,6 +1688,39 @@ schedule_insn (rtx insn)
      should have been removed from the ready list.  */
   gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
 
+  /* Reset debug insns invalidated by moving this insn.  */
+  if (MAY_HAVE_DEBUG_INSNS && !DEBUG_INSN_P (insn))
+    for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
+	 sd_iterator_cond (&sd_it, &dep);)
+      {
+	rtx dbg = DEP_PRO (dep);
+
+	gcc_assert (DEBUG_INSN_P (dbg));
+
+	if (sched_verbose >= 6)
+	  fprintf (sched_dump, ";;\t\tresetting: debug insn %d\n",
+		   INSN_UID (dbg));
+
+	/* ??? Rather than resetting the debug insn, we might be able
+	   to emit a debug temp before the just-scheduled insn, but
+	   this would involve checking that the expression at the
+	   point of the debug insn is equivalent to the expression
+	   before the just-scheduled insn.  They might not be: the
+	   expression in the debug insn may depend on other insns not
+	   yet scheduled that set MEMs, REGs or even other debug
+	   insns.  It's not clear that attempting to preserve debug
+	   information in these cases is worth the effort, given how
+	   uncommon these resets are and the likelihood that the debug
+	   temps introduced won't survive the schedule change.  */
+	INSN_VAR_LOCATION_LOC (dbg) = gen_rtx_UNKNOWN_VAR_LOC ();
+	df_insn_rescan (dbg);
+
+	/* We delete rather than resolve these deps, otherwise we
+	   crash in sched_free_deps(), because forward deps are
+	   expected to be released before backward deps.  */
+	sd_delete_dep (sd_it);
+      }
+
   gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
   QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
 
@@ -1712,6 +1745,12 @@ schedule_insn (rtx insn)
 	 advancing the iterator.  */
       sd_resolve_dep (sd_it);
 
+      /* Don't bother trying to mark next as ready if insn is a debug
+	 insn.  If insn is the last hard dependency, it will have
+	 already been discounted.  */
+      if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
+	continue;
+
       if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
 	{
 	  int effective_cost;      
-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
