
Re: No documentation of -fsched-pressure-algorithm


nick clifton <nickc@redhat.com> writes:
>>> Also - why shouldn't it be a user-level option ?  In my experience gcc's
>>> instruction scheduling tends to be very sensitive to the algorithm being
>>> compiled.  For some applications it does a good job, but for others it
>>> actually makes the performance worse.  Being able to tune the behaviour
>>> of the scheduler (via the various -fsched-... options) has been very
>>> helpful in getting good benchmark results.
>>
>> But the idea is that if you get good benchmark results with it
>> (as for A8, A9 and s390), you should enable it by default.
>
> OK, but what if it turns out that the new algorithm improves the 
> performance of some benchmarks/applications, but degrades others, within 
> the same architecture ?  If that turns out to be the case (and I suspect 
> that it will) then having a documented command line option to select the 
> algorithm makes more sense.

That was my point though.  If that's the situation, we need to find
out why.  We shouldn't hand the user a long list of options and tell
them to figure out which ones happen to produce the best code.

> Besides, this is open software.  Why exclude toolchain users just 
> because they are not developers?  Not everyone who uses gcc reads the 
> gcc mailing list, but they still might be interested in trying out this 
> option and maybe pinging the maintainer of their favourite backend if 
> they find that the new algorithm makes for better instruction scheduling.

Well, given the replies from you, Ian and Vlad (when reviewing the patch),
I feel once again in a minority of one here :-) but... I just don't
think we should be advertising this sort of stuff to users.  Not because
I'm trying to be cliquey, but because any time a user ends up having
to use options like this, it represents a failure on the part of the compiler.

I mean, at what level would we document it?  We could give a detailed
description of the two algorithms, but there should never be any need
to explain those to users (or for the users to have to read about them).
And there's no guarantee we won't change the algorithms between releases.
So I suspect we'd just have documentation along the lines of
"here, we happen to have two algorithms to do this.  Treat them as
black boxes, try each one on each source file, and see what works
out best."  Which isn't particularly insightful and not IMO a good
user interface.  I like to think the reasons for using the new
algorithm are more understood than that; see the long essay at
the head of the submission.

I remember that, while at Red Hat, one customer had accumulated a list of
about 20 (or so it seemed) options as their standard set.  Of course,
that set had been chosen based on an earlier compiler and no longer
performed as well as a much smaller set.  But they should never have
felt the need to use such a big set in the first place.  (And I think it
was actually based on the set used for official EEMBC results on that
architecture.  Which made sense.  The whole point of things like EEMBC
is to measure compiler performance on a particular set of applications,
so if that long list of options was thought to be necessary there,
then it was natural for the customer to use the same options on
their production code.)

Richard

