[RFC] Fix PR28684
Roger Sayle
roger@eyesopen.com
Mon Nov 13 15:17:00 GMT 2006
Hi Revital (and Clint),
On Mon, 13 Nov 2006, Revital1 Eres wrote:
> Regarding PR28684 - we were re-thinking of the current definition of the
> flags (-fassociative-math/-freciprocal-math) and decided that instead of
> thinking of those flags in terms of arithmetic laws we refer to their
> semantics. Thus -fassociative-math will be defined as follows:
>
> -fassociative-math:
> Allow optimization for floating-point arithmetic which may change the
> result of the operation due to rounding.
> NOTE: may reorder or strength reduce floating-point comparisons as
> well, and so may not be used when ordered comparisons are required.
> For example x + x + x can be transformed to 3 * x.
Hmmm, I've an alternative suggestion for PR28684.
How about splitting out some subset of -funsafe-math-optimizations into
a single flag called -fassociative-math-optimizations, defined as the
subset of "unsafe" transformations that doesn't change the number or
operator kind of binary floating-point operations.
For virtually all users, -ffast-math is the correct optimization flag
to use to get efficient floating-point code. (It's a pity this isn't
the -O default, as it is with most commercial compilers.) However,
PR28684 shows there is a subset of numerical programmers who are
interested in the hardware Mflop count. For this usage, operations may
be significantly reordered, but it still isn't legitimate to turn
x+x+x into x*3 (the first requires two flops, the second requires only
one).
It turns out that this may be related to the middle-end's use of
flag_signaling_nans and HONOR_SNANS: because an application may count
the number of floating-point operations applied to an sNaN, we attempt
to avoid optimizations that would affect that count.
In fact, this may be exactly like signaling NaNs, where negation and
fabs may be assumed not to be trapping operations; I suspect the
"official" notion of Mflops benchmarking doesn't consider these a
"flop" either. But I may be wrong, Clint?
Concepts like -freciprocal-math, whilst interesting, are little
more than an arbitrary reclassification of existing unsafe math
optimizations. They also blur the boundaries between which
optimizations we consider associative and which we consider
reciprocal, and some transformations may be difficult to categorize.
Is rewriting C1/(X*C2) into (C1/C2)/X reasonable?
Perhaps pragmatically, reciprocal-math optimizations can be those
that strength-reduce divisions into (an equal number of)
multiplications.
This pragmatic definition should also play well with the needs of
vectorizers, including reductions.
Aside:
It is unfortunate that there is such a stigma attached to the phrase
"unsafe-math".
[I am a freedom fighter, you are a rebel, he is a terrorist :-)]
I wonder if the fact that the Intel compiler performs what GCC considers
unsafe transformations by default, and has no flag for disabling them,
means that it can't be blessed by ATLAS' certification procedures? ;-)
See for example http://gcc.gnu.org/ml/gcc/2004-03/msg01068.html
I'd be very interested if there were some formal description of what
this certification involves. It appears odd that it's possible to
verify DO-178B code built with -ffast-math without a new compiler
option, but that numerical applications need to understand the validity
of the tools themselves, rather than relying on analysis/testing of the
code they produce.
Presumably this means that only open-source compilers can be validated,
as commercial vendors never list the complete set of transformations
they apply.
I'm beginning to warm to the idea of splitting -funsafe-math-optimizations
into -fassociative-math-optimizations and -fother-math-optimizations, both
enabled by -ffast-math, and then completely deprecating all options/flags
containing the word "unsafe".
Roger
--