This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Overwhelmed by GCC frustration
- From: Eric Gallager <egall at gwmail dot gwu dot edu>
- To: Richard Biener <richard dot guenther at gmail dot com>
- Cc: Andrew Haley <aph at redhat dot com>, Oleg Endo <oleg dot endo at t-online dot de>, Georg-Johann Lay <avr at gjlay dot de>, GCC Development <gcc at gcc dot gnu dot org>
- Date: Tue, 1 Aug 2017 07:08:41 -0400
- Subject: Re: Overwhelmed by GCC frustration
- Authentication-results: sourceware.org; auth=none
- References: <597F2FB4.email@example.com> <firstname.lastname@example.org> <email@example.com> <CAFiYyc2nPd18jEcUfQnf4UXOm4WTcS2RVMnMbs1+ezKWD_5obA@mail.gmail.com>
On 8/1/17, Richard Biener <firstname.lastname@example.org> wrote:
> On Mon, Jul 31, 2017 at 7:08 PM, Andrew Haley <email@example.com> wrote:
>> On 31/07/17 17:12, Oleg Endo wrote:
>>> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>>>> Around 2010, someone who used a code snippet that I published in
>>>> a wiki reported that the code didn't work and hung in an
>>>> endless loop. Soon I found out that it was due to some GCC
>>>> problem, and I got interested in fixing the compiler so that
>>>> it worked with my code.
>>>> 1 1/2 years later, in 2011, [...]
>>> I could probably write a similar rant. This is the life of a
>>> "minority target programmer". Most development efforts are being
>>> done with primary targets in mind. And as a result, most changes
>>> are being tested only on such targets.
>>> To improve the situation, we'd need a lot more target specific tests
>>> which test for those regressions that you have mentioned. Then of
>>> course somebody has to run all those tests on all those various
>>> targets. I think that's the biggest problem. But still, with a
>>> test case at hand, it's much easier to talk to people who have
>>> silently introduced a regression on some "other" targets. Most of
>>> the time they just don't know.
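A target-specific regression test of the kind described above can be sketched in GCC's DejaGnu style. The dg- directives below follow the documented testsuite conventions; the AVR target selector, the function, and the scanned pattern are illustrative assumptions only, not an actual test from the tree:

```c
/* { dg-do compile { target avr-*-* } } */
/* { dg-options "-Os" } */

/* Illustrative only: check that a trivial accessor compiles without
   the instruction pattern that the (hypothetical) regression
   introduced.  The real pattern to scan for depends on the bug.  */
unsigned char get_flag (volatile unsigned char *port)
{
  return *port & 0x01;
}

/* { dg-final { scan-assembler-not "push" } } */
```

With such a test checked in, anyone who silently regresses the "other" target gets a testsuite FAIL instead of a silent code-quality loss.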
>> It's a fundamental problem for compilers, in general: every
>> optimization pass wants to be the last one, and (almost?) no-one who
>> writes a pass knows all the details of all the subsequent passes. The
>> more sophisticated and subtle an optimization, the more possibility
>> there is of messing something up or confusing someone's back end or a
>> later pass. We've seen this multiple times, with apparently
>> straightforward control flow at the source level turning into a mess
>> of spaghetti in the resulting assembly. But we know that the
>> optimization makes sense for some kinds of program, or at least that
>> it did at the time the optimization was written. However, it is
>> inevitable that some programs will be made worse by some
>> optimizations. We hope that they will be few in number, but it
>> really can't be helped.
>> So what is to be done? We could abandon the eternal drive for more
>> and more optimizations, back off, and concentrate on simplicity and
>> robustness at the expense of ultimate code quality. Should we? It
>> would take courage, and there will be eternal pressure to improve
>> code. And, of course, we'd risk someone forking GCC and creating the
>> "superoptimized GCC" project, starving FSF GCC of developers. That's
>> happened before, so it's not an imaginary risk.
> Heh. I suspect -Os would benefit from a separate compilation pipeline
> like the one -Og has. Nowadays the early optimization pipeline is what you
> want (mostly simple CSE & jump optimizations, focused on code
> size improvements). That doesn't get you any loop optimizations but
> loop optimizations always have the chance to increase code size
> or register pressure.
Maybe in addition to the -Os optimization level, GCC mainline could
also add the -Oz optimization level like Apple's GCC had, and clang
still has? Basically -Os is -O2 with an additional focus on code
size, whereas -Oz is like -Os but reduces code size even more
aggressively. Adding it to the FSF's GCC, too, could help reduce
code size even further than -Os does.
> But yes, targeting an architecture like AVR which is neither primary
> nor secondary (so very low priority) _plus_ being quite special in
> target abilities (it seems to be very easy to mess up things) is hard.
> SUSE does have some testers doing (also) code size monitoring,
> but however much data we have, somebody needs to monitor it,
> bisect further, and report regressions deemed worthwhile. It's
> hard to avoid slow creep -- compile-time and memory use are a
> similar issue here.
>> Andrew Haley
>> Java Platform Lead Engineer
>> Red Hat UK Ltd. <https://www.redhat.com>
>> EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671