This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Live range shrinkage in pre-reload scheduling
- From: Vladimir Makarov <vmakarov at redhat dot com>
- To: ramrad01 at arm dot com, Kyrill Tkachov <kyrylo dot tkachov at arm dot com>, "gcc at gcc dot gnu dot org" <gcc at gcc dot gnu dot org>, Maxim Kuvyrkov <maxim dot kuvyrkov at linaro dot org>, Richard Sandiford <rdsandiford at googlemail dot com>
- Date: Thu, 15 May 2014 10:42:17 -0400
- Subject: Re: Live range shrinkage in pre-reload scheduling
- Authentication-results: sourceware.org; auth=none
- References: <5371F395 dot 8050208 at arm dot com> <53736CFD dot 6030402 at redhat dot com> <87d2fgco2v dot fsf at talisman dot default> <CAJA7tRZyXSamdb_jbnX+-HywZEajB=+Y-SzCyHzyTJDyfnhQJw at mail dot gmail dot com>
On 05/15/2014 02:46 AM, Ramana Radhakrishnan wrote:
> On Wed, May 14, 2014 at 5:38 PM, Richard Sandiford
> <email@example.com> wrote:
>> Vladimir Makarov <firstname.lastname@example.org> writes:
>>> On 2014-05-13, 6:27 AM, Kyrill Tkachov wrote:
>>>> Hi all,
>>>> In haifa-sched.c (in rank_for_schedule) I notice that live range
>>>> shrinkage is not performed when SCHED_PRESSURE_MODEL is used and the
>>>> comment mentions that it results in much worse code.
>>>> Could anyone elaborate on this? Was it just empirically noticed on x86_64?
>>> It was empirically noticed on SPEC2000. Practice is the sole
>>> criterion for heuristic optimizations. Sometimes a new heuristic
>>> optimization might look promising, but the reality can be quite different.
> Vlad - Was that based on experiments on x86_64?
Yes, I benchmarked x86 and x86-64. I believe this pass can help when we
don't use the 1st insn scheduler (that is the x86/x86-64 case). If the
1st insn scheduler is profitable, I guess it is better to use
register-pressure insn scheduling than live-range shrinkage + insn
scheduling (even with register pressure).
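As a rough illustration of the trade-off described above, the two strategies can be compared from the command line. `-fschedule-insns`, `-fsched-pressure`, and `-flive-range-shrinkage` are real GCC options (the latter appeared in GCC 4.9, contemporary with this thread); the source file name is a placeholder, and this is only a sketch of how one might measure the difference, not a recipe from the thread:

```shell
# Sketch: compare register-pressure scheduling against
# live-range shrinkage on some hot source file.
# "hot.c" is a placeholder for whatever code you benchmark.

# 1st insn scheduler with register-pressure heuristics:
gcc -O2 -fschedule-insns -fsched-pressure -S hot.c -o hot-pressure.s

# Live-range shrinkage pass instead (runs as a pre-reload pass):
gcc -O2 -fschedule-insns -flive-range-shrinkage -S hot.c -o hot-lrs.s

# Then compare spill code, code size, or run time of the two builds.
```

On targets where the 1st scheduler is off by default (x86/x86-64), `-flive-range-shrinkage` is the case the thread says showed only small improvements, which is why it is not a default optimization.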
Even for x86/x86-64 the improvement from live-range shrinkage was quite
small. Therefore it is not a default optimization. There is a set of
optimizations in GCC which improve some cases and worsen others in a
practically equal number of cases. They would be good candidates for
machine-learning-based option selection (e.g. the MILEPOST project),
because it is hard for a human to predict when they help (if it were
easy, we could switch on these optimizations only for such cases).
I guess somebody could continue work on improving live-range shrinkage
on the scheduler code base. Maybe there are better approaches to mine