bitwise & optimization

Vincent Diepeveen diep@xs4all.nl
Wed Jun 10 18:52:00 GMT 2015




On Wed, 10 Jun 2015, Manuel López-Ibáñez wrote:

> On 09/06/15 17:44, Vincent Diepeveen wrote:
>> i remember one of the GCC team
>> members showing the middlefinger that they simply wanted to keep intel 
>> ahead of
>> AMD in terms of speed and take care that GCC couldn't rival other compilers 
>> in
>> terms of speed (the implication of not doing this optimization in branchy 
>> codes).
>
> Links or it never happened.
>
> Given the number of non-Intel targets that GCC supports such a claim is 
> beyond extraordinary. Of course, what you cannot expect is Intel developers 
> to work for free on improving AMD support.
>
> And yes, GCC has bugs. I'm sure all of us contributing to GCC would like to 
> see them fixed by tomorrow. Alas, we are simply human.
>
> The advantage of GCC vs. Intel C++ is that with GCC you (and AMD!) can do 
> more than just bitterly complain and fix it (or pay someone to fix it) for 
> the benefit of all humanity, and there is nothing that Intel can do to stop 
> you.

Yeah, I should have saved those discussions on a backed-up hard drive 
back in 2007.

When Linus asked why any fix there wouldn't be possible, as "nowadays" (it 
was end 2007) there were Core 2 and K8, where systematically generating 
CMOV instructions would clearly be an advantage whenever that was 
objectively faster, a guy with a Polish name responded, making it clear 
to everyone that even Linus Torvalds didn't have the power to move the 
GCC team any further there, with the lame excuse that his Intel P4 might 
slow down when using such optimizations.

Of course that answer clearly was total BS, yet that was the official 
excuse given, and it was June 2007 back then when someone pointed me to 
that discussion.

Note that in some private e-mails sent to my e-mail account at the start 
of this century, Marc Lehmann already predicted difficult times ahead 
for GCC with the arrival of x64.

Regrettably he was right in every single prediction.

Naively I wondered back then why this would be such a big deal.

Eugene Nalimov, working for Wintel at the time, also predicted problems 
with x64, albeit in other respects: technical reasons indicating one had 
to rewrite one's OWN code to get it faster.

By 2007 it became clear that the problems with GCC were not technically 
related, but that some billion-dollar explanation was blocking 
progress.

It's now 2015 and GCC still hasn't improved much there.

Do you find it weird that some years ago I bought Xeons myself because of 
all the compiler problems I saw, and embraced Nvidia's CUDA GPGPU 
technology and hand-written assembly for the prime numbers I run?

Regrettably, compilers are one of those fields where the former Soviet 
nations, especially Russia, had a handful of very good guys, all of whom 
want to make big bucks working for a commercial company.

That definitely has not benefitted GCC.

Improving something where there is a billion-dollar reason (or rather a 
yearly 20-billion-dollar reason) not to improve: you only want to do that 
when you are in charge yourself, or it just won't happen, as Linus's 
postings over the past 8 years have proven with respect to GCC, 
especially those around 2007.

QED, Quod Erat Demonstrandum (that which was to be demonstrated).

> https://gcc.gnu.org/wiki/GettingStarted#Basics:_Contributing_to_GCC_in_10_easy_steps
> Kind regards,
> Manuel.
