Re: [RFC] Fix PR28684


On Tue, 14 Nov 2006, Revital1 Eres wrote:
> > Perhaps pragmatically, reciprocal-math-optimizations can be those
> > that strength reduce divisions into (an equal number of)
> > multiplications.
>
> but  x / y  into  x * (1/y)  seems not to fit it, which might hurt
> targets that support reciprocal but not division operations (e.g.
> Altivec).
>
> I still prefer the former definition... but that's just me :-)

I'm trying to constructively work towards some kind of resolution.
To me the fundamental issue is: why not use "-ffast-math"?

Some of the arguments have focused on "because I don't understand which
transformations it enables".  The ignorance of users is not normally
a great motivation for change.  If someone can't come up with a concrete
example of a problematic transformation, then there's probably no reason
not to trust GCC's optimizers.

Clint Whaley's bug report, PR middle-end/28684, survived being
immediately closed as "not-a-bug/wont-fix" because it provided an
interesting use case.  When benchmarking hardware on numerical kernels,
it's useful to establish an Mflop figure, computed from a hand-counted
number of floating point operations, so the compiler must not change
that operation count.  This permits several forms of re-association,
but not the full range of -ffast-math.  This is a well-defined problem,
useful to a small group of specialists, and it seems reasonable to
support it.
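As a concrete illustration of the Mflop issue (my own sketch, not
Clint's code), consider a kernel whose author has counted its
operations by hand:

    /* Counted by the benchmark author as 2*n flops: n divisions
       plus n additions.  Re-associating the sum keeps that count
       intact; strength-reducing the n divisions into one reciprocal
       and n multiplications does not, so the reported Mflop/s would
       no longer correspond to the source code.  */
    double
    scaled_sum (const double *x, int n, double s)
    {
      double sum = 0.0;
      for (int i = 0; i < n; i++)
        sum += x[i] / s;
      return sum;
    }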


Your interest from the point of view of vectorization is unrelated,
and not clearly defined.  For better or worse, GCC's historical
attitude to floating point optimizations is to perform, by default,
only transformations that produce bit-identical results, a constraint
much stronger than that of other compilers.  Anything less than this
has gone under the name of "unsafe" math optimizations.  Any numerical
expert will tell you that you only have to ignore sign-dependent
rounding, or re-associate an expression to produce a last-bit error,
and after only a few additional steps the results can become
incomparable.  The numeric errors you get from ignoring the sign of
zeros are no different in magnitude from converting divisions into
multiplications, implementing pow by multiplications, using extra
precision on x87, or using fmadd instructions.
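To make the last-bit point concrete, here's a minimal example (mine,
purely illustrative) where re-association alone changes the result:

    #include <stdio.h>

    int
    main (void)
    {
      double a = 1e16, b = -1e16, c = 1.0;
      /* (a + b) + c is 1.0, but a + (b + c) is 0.0, because
         b + c rounds back to -1e16 in double precision.  */
      printf ("%g %g\n", (a + b) + c, a + (b + c));
      return 0;
    }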

On this scale, there are very few vectorization transformations that
GCC would consider "safe", i.e. that produce bit-identical results.
The sad fact is that vectorization is what GCC calls "unsafe".  The
semantics and wording you prefer in your patch allow for an unbounded
error in the result, so the distinction can't be a "quality" argument.
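For instance, vectorizing even a simple reduction re-associates it;
this sketch assumes a 4-wide vector unit, but the point holds for any
width:

    /* Written order: ((s + x[0]) + x[1]) + x[2] + ...
       A 4-wide vectorized version instead computes four partial
       sums (x[0]+x[4]+..., x[1]+x[5]+..., etc.) and adds them at
       the end, a re-association that can change the last bits.  */
    double
    sum (const double *x, int n)
    {
      double s = 0.0;
      for (int i = 0; i < n; i++)
        s += x[i];
      return s;
    }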


So we now need to consider what semantics it is that you are trying
to address.  You wish to allow "X/Y" to be evaluated as "(1/Y)*X",
but ultimately how is that any different from plain -ffast-math,
which should be used routinely by GCC users?  Rather than ask for a
mysterious new option that includes transformation FOO or BAR, perhaps
we should look at it the other way and ask which transformations you
want to disallow.  If there are none, then the current definition
should work well for you.
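It's easy to observe how often the reciprocal transformation perturbs
the last bit; this little experiment (my own, not from the PR)
compares the two forms directly:

    #include <stdio.h>

    int
    main (void)
    {
      int diff = 0;
      for (int i = 1; i <= 1000; i++)
        {
          double x = 10.0, y = (double) i;
          /* x / y is a single correctly rounded operation;
             x * (1.0 / y) rounds twice and may be off by one ulp.  */
          if (x / y != x * (1.0 / y))
            diff++;
        }
      printf ("%d of 1000 results differ\n", diff);
      return 0;
    }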


I think there's still some merit in Clint's request for an Mflop
preserving subset of -ffast-math, which is PR28684, but I see anything
else as solving a different (and perhaps irrelevant) problem.

I agree the wording (and naming) of -funsafe-math-optimizations
needs to be improved.  In my mind, -funsafe-math-optimizations is
restricted to the mathematically valid transformations that are
permissible assuming unbounded precision arithmetic, things like
x+0 -> x.  Unfortunately we live in a world where the limitations
of our (IEEE) hardware mean that arithmetic performed in a computer
doesn't match or perfectly model a Newtonian universe.
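Even x+0 -> x, valid with unbounded precision, isn't exact under IEEE
arithmetic because of signed zeros; a tiny demonstration:

    #include <stdio.h>

    int
    main (void)
    {
      double x = -0.0;
      /* IEEE 754 defines -0.0 + 0.0 as +0.0, so folding x + 0.0
         to x flips the sign of zero: 1/(x + 0.0) is +inf while
         1/x is -inf.  */
      printf ("%g %g\n", 1.0 / (x + 0.0), 1.0 / x);
      return 0;
    }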


For example, Richard Guenther has recently proposed a patch to
transform pow(x,1.5) into x*sqrt(x), but didn't appreciate why
the transformation was guarded by -funsafe-math-optimizations.
The answer is that although the two expressions are mathematically
equivalent, and the latter is not only faster but often more
accurate, they are not guaranteed to produce identical results.
Hence code that assumes "y = 1.5; if (pow(x,1.5) == pow(x,y))" may
start to fail.  Even though the numerical accuracy has improved,
we disallow this transformation.  Indeed Robert Scott Ladd's
experiments, my own OpenEye experience, and other gcc postings have
all confirmed that numerical accuracy is usually improved, but at
the expense of numerical precision.

http://en.wikipedia.org/wiki/Accuracy_and_precision
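The failure mode is easy to sketch (the volatile below is just my
trick to keep the optimizer from folding the second call; the example
is mine, not Richard's patch):

    #include <stdio.h>
    #include <math.h>

    int
    main (void)
    {
      volatile double y = 1.5;  /* hide the constant from the compiler */
      double x = 3.0;
      /* With -funsafe-math-optimizations the literal pow (x, 1.5)
         may become x * sqrt (x), while pow (x, y) still calls libm;
         the two can disagree in the last bit, so this check may fail.  */
      printf (pow (x, 1.5) == pow (x, y) ? "equal\n" : "not equal\n");
      return 0;
    }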

Perhaps we should rename this option -faccurate-math and describe
the default as -fprecise-math. :-)


I'm a bit disappointed that neither the ATLAS folks nor you yourself
have yet articulated a strong functionality request.  I appreciate
that you're somehow unhappy with -ffast-math, but apart from the
Mflops argument you've failed to put your finger on precisely (or
exactly :-) what about it you believe needs fixing.  Even in the
Mflops argument it seems ambiguous whether operations on constant
arguments may be evaluated at compile-time, "2.0 + 3.0 -> 5.0"?
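That ambiguity matters in practice: if the compiler folds such an
expression, the flops the benchmark author counted never execute.
A trivial (made-up) example:

    /* GCC typically folds this to 5.0 at compile time, so the two
       "flops" a benchmark author counted here cost zero run time.  */
    double
    five (void)
    {
      return 2.0 + 3.0;
    }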


Anyway, I'm pleased that we're discussing the issues.

Roger
--

