This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Remove sel-sched?


On Fri, Jan 15, 2016 at 10:48 AM, Andrey Belevantsev <abel@ispras.ru> wrote:
> On 14.01.2016 20:26, Jeff Law wrote:
>>
>> On 01/14/2016 12:07 AM, Andrey Belevantsev wrote:
>>>
>>> Hello Bernd,
>>>
>>> On 13.01.2016 21:25, Bernd Schmidt wrote:
>>>>
>>>> There are a few open PRs involving sel-sched, and I'd like to start a
>>>> discussion about removing it. Having two separate schedulers isn't a
>>>> very
>>>> good idea in the first place IMO, and since ia64 is dead, sel-sched gets
>>>> practically no testing despite being the more complex one.
>>>>
>>>> Thoughts?
>>>
>>>
>>> Out of the PRs we have, two are actually fixed but not marked as such.
>>> This year's PRs come from Zdenek's recent Debian rebuild with GCC 6,
>>> and I will get on them now.  For the other two PRs from last year, it
>>> is my fault for not fixing them in a timely manner.  Frankly, 2015 was
>>> very tough for me and my colleagues (we worked six days a week for
>>> most of the year), but since January things are fine again and we'll
>>> catch up now.  Sorry for that.
>>>
>>> You're also right that sel-sched now gets limited testing.  We made
>>> it work initially for ia64, x64, ppc and cell, and then added ARM,
>>> too.  Outside of the ia64 world, I had private reports of sel-sched
>>> being used successfully for cell, and we used it in our own contract
>>> work for optimizing some ARM apps with GCC.
>>>
>>> In short, we're willing to maintain sel-sched, and I apologize for
>>> the slow PR fixing last year; it should not be a problem from now on.
>>> If there are any big plans for reorganizing the schedulers and
>>> sel-sched stands in the way of them, let's discuss it and we'll be
>>> willing to help in any way.
>>
>> FWIW, I've downgraded the sel-sched stuff to P4 for this release given how
>> that scheduler is typically used (ia64, which is a dead platform).
>>
>> I think the bigger question Bernd is asking here is whether or not it
>> makes sense to have multiple schedulers.  In an ideal world we'd bake
>> them off, select the best, and deprecate/remove the others.
>>
>> I didn't follow sel-sched development closely, so forgive me if the
>> questions are simplistic/naive, but what are the main benefits of
>> sel-sched, and is it at a point (performance-wise) where it could
>> conceivably replace the aging haifa scheduler infrastructure?
>
>
> The main sel-sched selling points at the time of its inclusion were as
> follows: bookkeeping code support (moving an insn between any blocks in
> the scheduling region), insn transformation support (renaming,
> unification, substitution through register copies), scheduling at
> several points at once, and pipelining support.  Together it paid off
> with something like 7-8% on SPEC at the time on ia64, but not so on the
> other archs, where we didn't spend much time on tuning and usually got
> both ups and downs compared to haifa.  On ia64 the speedup was mostly
> due to pipelining with speculation, as far as I recall; for the others,
> including ARM, renaming and substitution were useful.
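[To make the "substitution through register copies" transformation concrete, here is a toy, invented-IR sketch; it is not GCC code, and the IR shape and names are assumptions made for illustration only.]

```python
# Toy illustration of substitution through register copies: when
# hoisting an instruction above a copy, references to the copy's
# destination can be rewritten to read the copy's source instead,
# removing the dependence that blocked the motion.

def substitute_through_copy(insn, copy):
    """Rewrite insn's sources so it no longer depends on copy's dest.

    insn is (dest, [sources]); copy is (dest, src), i.e. "dest = src".
    """
    copy_dest, copy_src = copy
    dest, srcs = insn
    new_srcs = [copy_src if s == copy_dest else s for s in srcs]
    return (dest, new_srcs)

# "r2 = r1" followed by "r3 = r2 + r4": after substitution the add
# reads r1 directly and can be scheduled before the copy.
copy = ("r2", "r1")
add = ("r3", ["r2", "r4"])
print(substitute_through_copy(add, copy))  # ('r3', ['r1', 'r4'])
```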
>
> Since then, Vlad and Bernd have put more improvements into the haifa
> scheduler, including sched pressure, predication and backtracking, so
> both schedulers now have features not present in the other, and the
> initial feature advantage has somewhat worn off.
>
> Also, the big problem of sel-sched is speed -- it is slow because
> dependency lists are not maintained throughout the scheduler; most of
> the transformation machinery is implemented by moving an insn up the
> region and checking what would have to happen to allow insn A to move
> up through insn B.  I've done most of what I could imagine to speed it
> up but haven't managed to make sel-sched the default at -O2.
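[The per-motion dependence test described above can be sketched with read/write sets; this is a simplified model, not sel-sched's actual implementation, and the set-based API is invented for the example.]

```python
# Without maintained dependence lists, the scheduler must ask, for each
# candidate pair, whether insn A may be hoisted above insn B -- a check
# repeated on every upward motion, which is where the cost comes from.

def can_move_above(a_reads, a_writes, b_reads, b_writes):
    """True if A can be hoisted above B: no true, anti, or output dep."""
    if a_reads & b_writes:    # true dependence: A reads what B writes
        return False
    if a_writes & b_reads:    # anti dependence: A writes what B reads
        return False
    if a_writes & b_writes:   # output dependence: both write the same reg
        return False
    return True

# r3 = r1 + r2 may move above r5 = r4, but not above r1 = r6.
print(can_move_above({"r1", "r2"}, {"r3"}, {"r4"}, {"r5"}))  # True
print(can_move_above({"r1", "r2"}, {"r3"}, {"r6"}, {"r1"}))  # False
```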
>
> So to sum this up, I don't think sel-sched can replace haifa in its
> current state.  These days, to speed up the scheduler I'd add something
> like path-based dependency tracking with bit vectors, as is done in
> Intel's wavefront scheduling, though that is patented (Vlad may correct
> me here).  Or we need to devise other means of keeping dependencies up
> to date.  We've tried that but never got it working well enough.
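[As a rough sketch of the bit-vector idea mentioned above (this is only the basic bitset technique, not the patented wavefront algorithm, and all names are invented): insn i's vector has bit j set if i depends, directly or transitively, on insn j, so a dependence query becomes a single bit test.]

```python
# Build transitive dependence bit vectors for a straight-line region.

def build_dep_vectors(direct_deps):
    """direct_deps[i] = iterable of earlier insn indices that insn i
    directly depends on.  Returns ints used as bit vectors."""
    vecs = []
    for deps in direct_deps:
        v = 0
        for j in deps:
            v |= (1 << j) | vecs[j]  # inherit j's whole set in one OR
        vecs.append(v)
    return vecs

# insn2 depends on insn1, which depends on insn0: the OR propagates
# insn0 into insn2's vector, so the query is one AND.
vecs = build_dep_vectors([[], [0], [1]])
print(bool(vecs[2] & (1 << 0)))  # True: insn2 transitively depends on insn0
```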
>
> The thing I would not like to lose is sel-sched pipelining.  It can
> work on any loops, not only countable ones as modulo scheduling does,
> and this can make a difference for some apps even outside of ia64.  But
> if one basic scheduler is desired, maybe the better use of our
> resources would be to improve modulo scheduling instead, so as not to
> lose pipelining capabilities in gcc.  It is completely unmaintained
> now; my colleague Roman Zhuykov had a couple of improvements ~4 years
> ago, but most of them never got into trunk due to lack of review.  He
> can step up as a modulo-sched maintainer if needed; the code is alive
> (see PR69252).
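[For readers unfamiliar with modulo scheduling (the technique behind GCC's -fmodulo-sched): its classic starting point is a lower bound on the initiation interval II, bounded by resource pressure (ResMII) and recurrence cycles (RecMII).  The sketch below is the textbook formula, not GCC's implementation; the machine model is invented for the example.]

```python
import math

def res_mii(uses_per_resource):
    """ResMII = max over resources of ceil(uses / available units)."""
    return max(math.ceil(uses / units) for uses, units in uses_per_resource)

def mii(uses_per_resource, rec_mii):
    """Lower bound on the initiation interval of the pipelined loop."""
    return max(res_mii(uses_per_resource), rec_mii)

# 6 ALU ops on 2 ALUs and 3 loads on 1 load unit give ResMII = 3; with
# a recurrence needing 4 cycles, a new iteration can start at best
# every 4 cycles.
print(mii([(6, 2), (3, 1)], rec_mii=4))  # 4
```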

Btw, I'd like people to start thinking about whether the scheduling
algorithms that work on loops (and sometimes require unrolling of loops)
can be implemented in a way that applies the unrolling at the GIMPLE
level (not the scheduling itself, of course).  Thus we'd have an
analysis phase (for the unroll-factor computation) that can be shared
across ILs.

Scheduling of loads/stores might still happen on GIMPLE if we have a
good enough idea of register pressure -- I remember we jumped through
hoops in the past to get better dependence info on RTL for this (the
ddr export to RTL, never merged).

Basically, unrolling on RTL should go away.

Richard.

> Sorry for a long mail :)
>
> Andrey
>
>>
>> Jeff
>
>
