This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: question about -ffast-math implementation
- From: Tim Prince <n8tm at aol dot com>
- To: gcc at gcc dot gnu dot org
- Date: Mon, 02 Jun 2014 07:23:49 -0400
- Subject: Re: question about -ffast-math implementation
- Authentication-results: sourceware.org; auth=none
- References: <CAKKZoUUtBdU_ZR61hsztymquqDrRkqEUoxeiEE0kzmrpUq+Emg at mail dot gmail dot com> <CAO9iq9ENWmnEwf1HbcK4c3y8VYZP7w0W_0vL5ypDECTP5e4a=g at mail dot gmail dot com> <CA+=Sn1=KPEt-HL48CXOYVbQdoZe4FRSeqoA8PdcUBePrVjExxg at mail dot gmail dot com>
- Reply-to: tprince at computer dot org
On 6/2/2014 3:00 AM, Andrew Pinski wrote:
On Sun, Jun 1, 2014 at 11:09 PM, Janne Blomqvist
On Sun, Jun 1, 2014 at 9:52 AM, Mike Izbicki <email@example.com> wrote:
I'm trying to copy gcc's behavior with the -ffast-math compiler flag
into Haskell's GHC compiler. The only documentation I can find about
it is at:
I understand how floating-point operations work and have come up with
a reasonable list of optimizations to perform. But I doubt it is
complete.
My question is: where can I find all the gory details about what gcc
will do with this flag? I'm perfectly willing to look at source code
if that's what it takes.
In addition to the official documentation, a nice overview is at
Useful, thanks for the pointer
I find it difficult to remember how to reconcile differing treatments by
gcc and gfortran under -ffast-math; in particular, with respect to
-fprotect-parens and -freciprocal-math. The latter appears to comply
with the Fortran standard.
Though for the gory details and authoritative answers I suppose you'd
have to look into the source code.
Also, are there any optimizations that you wish -ffast-math could
perform, but for various architectural reasons don't fit into gcc?
There is of course a (nearly endless?) list of optimizations that
could be done but aren't (lack of manpower, impractical, whatnot). I'm
not sure there are any interesting optimizations that would be
dependent on loosening -ffast-math further?
Intel tried to add -complex-limited-range as a default under -fp-model
fast=1 but that was shown to be unsatisfactory.
(One thing I wish wouldn't be included in -ffast-math is
-fcx-limited-range; the naive complex division algorithm can easily
lead to comically poor results.)
Which is kinda interesting because the Google folks have been trying
to turn on -fcx-limited-range for C++ a few times now.
Now, with the introduction of omp simd directives and pragmas, we have
disagreement among various compilers on the relative roles of the
directives and the fast-math options.
I've submitted PR60117 hoping to get some insight on whether omp simd
should disable optimizations otherwise performed by -ffast-math.
Intel made the directives override the command-line fast (or
"no-fast") settings locally, so that complex-limited-range might be in
effect inside the scope of the directive (whether you want it or not).
They made changes in the current beta compiler, so it's no longer
practical to set standard-compliant options but discard them by pragma
in individual for loops.