
[PATCH] avoid scheduling volatiles across sequence points


OK, my confidence that this is the right way to go is very low, but I
thought I'd send it here first.

So, the testcase:

volatile int flag = 0;

extern struct {
  int mode;
  int offset;
  int size;
} buffer[];

void foo(int i, int track)
{
  int parent;  /* note: used uninitialized below */
  int check;   /* note: unused */

  /* The two volatile stores must bracket the buffer update.  */
  flag = 1;
  buffer[i].offset = track * buffer[parent].size;
  flag = 0;
}
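
To make the ordering requirement concrete, here is a hypothetical
consumer (my invention, not part of the testcase): an asynchronous
signal handler that uses flag to tell when buffer is mid-update.

#include <signal.h>

extern volatile int flag;

/* Hypothetical observer: foo() sets flag to 1 while it updates
   buffer and back to 0 when it is done.  If the scheduler sinks the
   store of 1 past the buffer store, this handler can run with
   flag == 0 while buffer is still being modified.  */
void handler(int sig)
{
  if (!flag)
    {
      /* buffer is supposedly quiescent here -- but only if the
         volatile stores in foo() were not reordered.  */
    }
}

(Registering it would be the usual signal(SIGALRM, handler).)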

Now, this doesn't trigger on x86, but it does on every MIPS target I
could test, as well as on PPC.

Effectively the scheduler takes the code from:

<volatile write>
<seq point>
<volatile write>

to:

<seq point>
<volatile write>
<volatile write>

Which is, of course, incorrect: the volatile stores must stay on
either side of the intervening memory access.  Looking through the
scheduling code, I saw that sched_analyze_2 has a check for when we
are reading memory locations, but there is nothing equivalent on the
write side, so I added a check in sched_analyze_1.
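
For what it's worth, here is the shape of the new check (the full
hunk is below).  As I understand the dependence machinery, once
reg_pending_barrier is set, sched_analyze_insn adds dependencies
between this insn and everything pending, so no memory operation can
be scheduled across the volatile store:

      /* In sched_analyze_1, right after the cselib substitution on
         the store address: a volatile store must act as a
         scheduling barrier.  */
      if (MEM_VOLATILE_P (dest))
        reg_pending_barrier = true;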

Comments?

-eric


-- 
I will not carve gods

Index: sched-deps.c
===================================================================
RCS file: /cvs/cvsfiles/devo/gcc/sched-deps.c,v
retrieving revision 1.12
diff -u -p -w -r1.12 sched-deps.c
--- sched-deps.c	2002/02/19 03:23:52	1.12
+++ sched-deps.c	2002/05/31 01:07:30
@@ -652,6 +652,9 @@ sched_analyze_1 (deps, x, insn)
 	  XEXP (t, 0) = cselib_subst_to_values (XEXP (t, 0));
 	}
 
+      if (MEM_VOLATILE_P (dest))
+	reg_pending_barrier = true;
+
       if (deps->pending_lists_length > MAX_PENDING_LIST_LENGTH)
 	{
 	  /* Flush all pending reads and writes to prevent the pending lists

