This is the mail archive of the mailing list for the GCC project.


Re: What is acceptable for -ffast-math? A numerical viewpoint

<<And you're still asserting that and simply expecting me to take your
word for it, presumably because you're a Real Expert and I'm a mere peon
who writes (ugh) games. I keep asking you to provide evidence for that
assertion and you keep failing to do so.

OK, my assertion is that 

IF you accept the principle that the compiler is allowed to perform
any transformation that would be valid in real arithmetic,

THEN, all programs can raise overflow right away.

Why? Because you can transform

 a / b to 

 (a * K) / (b * K)

for any large K, causing overflow. If you go back over previous notes, you
will see how that could in fact occur in practice.
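To make the overflow concrete, here is a small Python sketch (illustrative values of my own choosing, not taken from the earlier notes) showing that a / b and (a * K) / (b * K), identical in real arithmetic, are not equivalent in IEEE double precision:

```python
import math

a = 1e10
b = 2e10
K = 1e300   # a "large K"; in real arithmetic it cancels exactly

exact = a / b               # 0.5, computed without trouble
scaled = (a * K) / (b * K)  # a*K and b*K both overflow to inf

print(exact)                # 0.5
print(math.isnan(scaled))   # True: inf / inf is NaN
```

Here the "valid in real arithmetic" rewrite turns a perfectly well-behaved division into NaN, which is exactly the overflow-right-away problem.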

Now if you say this is "blatantly extreme" or "gratuitous"

and that what you really meant to say was

A compiler is allowed to perform any transformation that would be valid
in real arithmetic unless the transformation is blatantly extreme or
gratuitous,
then you have not helped much, since you have not defined what exactly
you mean by blatantly extreme or gratuitous, and in fact this is what
the whole thread is about. Not everyone is going to agree on what is
BE or GR, and we have to discuss specific examples to reach a consensus.
Any extreme position here is unsupportable, and there is no simple
principle to set a dividing line.

<<"If analysis shows this is reliable." But your example was one where it
was blatantly obvious that it _wasn't_ reliable. It's the same old
problem again: you keep coming up with examples that rely on the
assumption that I'm incapable of doing the analysis.

No, you apparently missed the point of the NR example.

If the arithmetic is predictable, and the compiler maps the high-level
language into the set of IEEE operations that could reasonably be expected
(a criterion met by many compilers), then the analysis is straightforward.
Indeed this is very common coding, and quite correct under the
assumption of "sensible" arithmetic.

The point was not that the NR loop can be disrupted by reasonable
transformations like extra precision on an ia32, but rather to point
out the much more radical transformation that recognizes that this
loop will never terminate in real arithmetic. Of course that transformation
is indeed gratuitous and blatantly extreme. I only gave the example to
point out that the absolute real arithmetic standard, though well defined,
is unsupportable.
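A minimal sketch of the kind of NR loop at issue (my own reconstruction, not the exact code from earlier in the thread): it terminates because IEEE doubles form a finite set, so the iterate eventually stops changing; under exact real arithmetic the iterates would (generically) never repeat, and a compiler applying the absolute real arithmetic rule could conclude the loop never terminates:

```python
def nr_sqrt(a, max_iter=100):
    """Newton-Raphson for sqrt(a), terminating when the iterate
    stops changing.  Correct under "sensible" IEEE arithmetic;
    a nonterminating loop under exact real arithmetic."""
    x = a
    prev = 0.0
    # The iteration cap is a belt-and-braces guard against the rare
    # case where the final iterates oscillate between two adjacent
    # doubles instead of settling on one.
    for _ in range(max_iter):
        if x == prev:
            break
        prev = x
        x = 0.5 * (x + a / x)
    return x
```

Under predictable IEEE arithmetic the `x == prev` test is a legitimate termination condition; it is only under the real-arithmetic reading that the loop is "infinite".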

Indeed the example served its purpose, since you are no longer taking the
absolute real arithmetic rule as gospel: you have added the
"gratuitous" and "blatantly extreme" exceptions.

So we are once more moved away from the (useless) exercise of trying to
establish simple predicates here, and arrive back at a subjective
standard which will require case-by-case discussion (which was my point
in the first place).

<<the code and serve no plausible optimisation purpose (e.g. multiplying
both sides by some large factor), and (and I want to emphasise this yet
again because it's the root of your misapprehension) _without assuming
that the coder doesn't understand floating point as well as you do_.

Two more points. First, please reread my previous example, which showed that
multiplying both sides by a large factor *can* serve some plausible
optimization purpose.
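As one concrete instance (my own illustration, using the standard reciprocal-multiply rewrite rather than the exact example from the earlier note): replacing a / b with a * (1 / b) trades a slow division for a cheap multiply, a perfectly plausible optimization, yet it can overflow where the original did not:

```python
import math

a = 1e-5
b = 1e-310              # a subnormal divisor

direct = a / b          # ~1e305: finite, well within double range
rcp = a * (1.0 / b)     # 1/b overflows to inf, so the product is inf

print(math.isfinite(direct), math.isinf(rcp))
```

The rewrite is motivated purely by speed, not malice, which is why "serves no plausible optimization purpose" does not draw the line where you want it drawn.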

Second, the issue is not how well the coder understands floating-point;
it is about coming up with a well-defined set of transformations that
your coder who understands floating-point well can deal with.

Being an expert in floating-point arithmetic is pretty useless if you do
not have a clear definition of the floating-point model you are working with.
