


Re: Overwhelmed by GCC frustration


On 31/07/17 17:12, Oleg Endo wrote:
> On Mon, 2017-07-31 at 15:25 +0200, Georg-Johann Lay wrote:
>> Around 2010, someone who used a code snippet that I published in
>> a wiki reported that the code didn't work and hung in an
>> endless loop.  Soon I found out that it was due to some GCC
>> problem, and I got interested in fixing the compiler so that
>> it worked with my code.
>>
>> 1 1/2 years later, in 2011, [...]
> 
> I could probably write a similar rant.  This is the life of a
> "minority target programmer".  Most development efforts are being
> done with primary targets in mind.  And as a result, most changes
> are being tested only on such targets.
> 
> To improve the situation, we'd need a lot more target specific tests
> which test for those regressions that you have mentioned.  Then of
> course somebody has to run all those tests on all those various
> targets.  I think that's the biggest problem.  But still, with a
> test case at hand, it's much easier to talk to people who have
> silently introduced a regression on some "other" targets.  Most of
> the time they just don't know.
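
For what it's worth, a minimal sketch of such a test in the
testsuite's DejaGnu style might look like the one below.  The target
selector, the options and the scenario (a counting loop that must not
be miscompiled into an endless one, echoing the original report) are
all illustrative, not taken from a real PR:

/* { dg-do run { target avr-*-* } } */
/* { dg-options "-Os" } */

extern void abort (void);

unsigned char
count_up (unsigned char limit)
{
  unsigned char i;
  for (i = 0; i < limit; i++)
    __asm__ volatile ("" ::: "memory");  /* keep the loop body alive */
  return i;
}

int
main (void)
{
  /* If the loop condition is miscompiled, this hangs (and the
     testsuite's timeout flags it) instead of returning 200.  */
  if (count_up (200) != 200)
    abort ();
  return 0;
}

With something like this in the tree, anyone whose change regresses
the "other" target gets told by the nightly testers rather than by an
angry user years later.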

It's a fundamental problem for compilers, in general: every
optimization pass wants to be the last one, and (almost?) no-one who
writes a pass knows all the details of all the subsequent passes.  The
more sophisticated and subtle an optimization, the more possibility
there is of messing something up or confusing someone's back end or a
later pass.  We've seen this multiple times, with apparently
straightforward control flow at the source level turning into a mess
of spaghetti in the resulting assembly.  But we know that the
optimization makes sense for some kinds of program, or at least that
it did at the time the optimization was written.  However, it is
inevitable that some programs will be made worse by some
optimizations.  We hope that they will be few in number, but it
really can't be helped.
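
As a made-up illustration of the kind of thing I mean, source like
this is about as plain as control flow gets:

int
find (const int *a, int n, int key)
{
  /* One loop, one early exit: trivially readable at the source level.  */
  for (int i = 0; i < n; i++)
    if (a[i] == key)
      return i;
  return -1;
}

Yet after jump threading, loop unrolling and block duplication, each
a clear win on a large out-of-order core, the assembly on a small
target can come out as a web of duplicated compares and branches,
paying in code size for a speedup that isn't there.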

So what is to be done?  We could abandon the eternal drive for more
and more optimizations, back off, and concentrate on simplicity and
robustness at the expense of ultimate code quality.  Should we?  It
would take courage, and there will be an eternal pressure to improve
code.  And, of course, we'd risk someone forking GCC and creating the
"superoptimized GCC" project, starving FSF GCC of developers.  That's
happened before, so it's not an imaginary risk.

-- 
Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671

