Re: GCC beaten by ICC in stupid trig test!

Roger Sayle wrote:
May I remind everyone that the subject title "GCC beaten by ICC in
stupid trig test!" refers to a posting by Scott Robert Ladd...

This thread has wandered a tad far from its original topic, hasn't it? I also note that my original subject did not contain the word "stupid", which was added by someone with some badly mistaken notions and a rude manner.

Back to the original topic: Doing a "fair" comparison between ICC and GCC is problematic at best, given that neither compiler provides complete documentation of its various options. For example, it appears that ICC does "unsafe" math by default, leading me to suspect that I should use ICC's -mp or -mp1 switches when comparing against GCC. But I'm not certain whether "icc -mp" is *really* equivalent to a plain "gcc" (sans -ffast-math).
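
For anyone who wants to see the sort of difference I'm talking about, here is a tiny throwaway probe (my own naming, not taken from any existing benchmark) that can be built with and without -ffast-math. Whether either line of output actually changes depends on the GCC version and target, so treat it as a sketch rather than a test case:

/* fastmath_probe.c -- a throwaway sketch, not part of any benchmark.
   Build it twice and compare the output:
       gcc -O2 fastmath_probe.c -o safe && ./safe
       gcc -O2 -ffast-math fastmath_probe.c -o fast && ./fast        */
#include <stdio.h>

int main(void)
{
    volatile double zero = 0.0;
    double nan_val = zero / zero;   /* a quiet NaN, computed at run time */

    /* -ffast-math implies -ffinite-math-only, so the compiler may
       assume this comparison can never be true and fold it to 0.    */
    printf("nan_val != nan_val : %d\n", nan_val != nan_val);

    /* Reassociation: in plain double arithmetic (1e16 + 1.0) rounds
       back to 1e16, so the difference is 0; a compiler that is free
       to reassociate may compute (1e16 - 1e16) + 1.0 = 1 instead.   */
    double big = 1e16, tiny = 1.0;
    printf("(big + tiny) - big : %g\n", (big + tiny) - big);
    return 0;
}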

At least in the case of GCC, I can study the source code to find every instance where -ffast-math affects code generation; however, the average compiler user has neither the skills nor the time to examine the compiler source code for indications of its behavior.

What a numerical programmer needs to know is: How, exactly, do all of these switches affect accuracy?

Accuracy benchmarks are few and far between, and many are hoary old codes translated badly from antiquated versions of Fortran. I've found that most of these "accuracy" benchmarks produce identical results with and without -ffast-math; where there are differences, they are trivial, and in one case -ffast-math actually *improved* accuracy.
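
To give a concrete idea of the kind of measurement I mean, here is a crude probe of my own (names are mine, nothing from a published benchmark) that uses sinl() as an admittedly imperfect reference. Build it once with and once without -ffast-math and compare the reported maximum error:

/* accuracy_probe.c -- a rough sketch of the measurement, nothing more.
   Compile with: gcc -O2 accuracy_probe.c -lm                          */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double max_err = 0.0, worst_x = 0.0;
    int i;

    for (i = 0; i < 1000000; ++i) {
        double x = i * 1.0e-5;                  /* 0 .. 10 radians      */
        long double ref = sinl((long double)x); /* crude "reference"    */
        double err = (double)fabsl(ref - (long double)sin(x));
        if (err > max_err) {
            max_err = err;
            worst_x = x;
        }
    }
    printf("max abs error %.3e at x = %f\n", max_err, worst_x);
    return 0;
}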

Of course, the definition of "accuracy" is somewhat nebulous. For some programs, it is important that identical results be produced on every platform; for others, accuracy means how closely the computed result matches the mathematically exact one.

And, of course, most programmers forget the mathematical rules about significant digits in source data. If I multiply 1.5 by 3.1415927, the answer is 4.7, not 4.71238905, unless I know for a fact that 1.5 is exact, and not some measurement of a value between 1.49 and 1.51 (for example).
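
Just to make that concrete, here is a quick throwaway helper of my own (round_sig() is not a standard function, and it handles positive values only) that rounds a result to a given number of significant decimal digits:

/* sigfig.c -- round_sig() is a quick throwaway helper, not a library
   function.  Compile with: gcc -O2 sigfig.c -lm                       */
#include <stdio.h>
#include <math.h>

/* Round x > 0 to n significant decimal digits. */
static double round_sig(double x, int n)
{
    double scale = pow(10.0, n - 1 - (int)floor(log10(x)));
    return floor(x * scale + 0.5) / scale;
}

int main(void)
{
    double product = 1.5 * 3.1415927;
    printf("full product         : %.8f\n", product);              /* 4.71238905 */
    printf("2 significant digits : %.1f\n", round_sig(product, 2)); /* 4.7        */
    return 0;
}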

Ah, but I now enter the realm of interval arithmetic, and am drifting from my own topic.
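
(For the curious, the back-of-the-envelope interval version of the example above looks like the toy below; real interval arithmetic needs directed rounding, which this deliberately ignores.)

/* interval.c -- a toy interval version of the "1.5" example; proper
   interval arithmetic would round the endpoints outward.             */
#include <stdio.h>

int main(void)
{
    /* Treat "1.5" as a measurement somewhere in [1.49, 1.51]. */
    double lo = 1.49, hi = 1.51;
    double pi = 3.1415927;

    /* Multiplying a positive interval by a positive constant just
       scales both endpoints.                                       */
    printf("[%g, %g] * %g = [%.8f, %.8f]\n",
           lo, hi, pi, lo * pi, hi * pi);
    return 0;
}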

Hence my desire to write a new accuracy benchmark, something I'm doing whenever I have some of that elusive "free time." ;)

Scott Robert Ladd
Coyote Gulch Productions
Software Invention for High-Performance Computing
