This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: killed schedule groups?


Richard Henderson <rth@redhat.com> writes:

> On Thu, Jan 16, 2003 at 03:56:21PM -0500, Vladimir Makarov wrote:
> >   It has a noticeable effect even on the quality of the first insn
> > scheduling pass (at least for Itanium).  A schedule group may consist
> > of several insns, and in many cases they cannot all be issued in one
> > cycle.  The scheduler treats them as one insn and usually issues them
> > on the same cycle.  As a consequence, the simulated time is inaccurate,
> > and simulated time is the most important factor in insn scheduling.
> 
> Well, we have another problem there: the scheduler doesn't
> understand that a CALL_INSN goes somewhere else, executes
> other code for many cycles, and then starts the next insn
> on a brand new cycle.
> 
> Not sure how to address this, exactly.  I think it wouldn't
> hurt to advance the pipeline state by, say, 10 cycles when
> we see a call insn.  Another way might be to clear the 
> entire pipeline state and add all insns queued on long
> latency insns already issued to the ready queue.

Most routines end with a fairly constant sequence, on RISC machines
usually involving a bunch of loads and a 'return' instruction.  It'd
be nice if we could put that knowledge into the scheduler...

On PPC, on return from a large function, you can be pretty certain
that the load/store unit is busy and nothing else is.  The processor
folds the return instruction quickly enough that the pipeline isn't
completely clear.

-- 
- Geoffrey Keating <geoffk@geoffk.org>
