
Re: What is acceptable for -ffast-math? A numerical viewpoint


dewar@gnat.com wrote:
> 
> Now if you say this is "blatantly extreme" or "gratuitous"
> 
> and that what you really meant to say was
> 
> A compiler is allowed to perform any transformation that would be valid
> in real arithmetic unless the transformation is blatantly extreme or
> gratuitous.
> 
> then you have not helped much, since you have not defined what exactly
> you mean by blatantly extreme or gratuitous, and in fact this is what
> the whole thread is about.

Actually, what I should have said was "allowed to perform any
_optimisation_ that would be valid in real arithmetic". Since the whole
thread was about optimisation options, I simply took it for granted that
transformations that didn't have any plausible optimisation value were
out of consideration. I guess I should have known better.

("When something goes without saying, it's been my experience that
you're better off if you say it." -- Carolyn Kaiser)

Can we please take it as given that no transformation will be applied
without something at least vaguely resembling a rational argument that
it could be an actual optimisation?

> The point was not that the NR loop can be disrupted by reasonable
> transformations like extra precision on an ia32, but rather to point
> out the much more radical transformation that recognizes that this
> loop will never terminate in real arithmetic.

You're right that I misunderstood the example, but now that I see what
you meant, I still say the example supports my point and not yours. If
I were aware that the compiler was going to perform pretend-its-real
transformations, I wouldn't write a loop that expects to converge on a
single value.
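
(For concreteness, here's a minimal sketch of the fragile pattern I
mean; the Newton-Raphson square root and the exact-equality test are
my own illustration, not code from anyone's actual message.)

    /* Naive Newton-Raphson square root: iterate until two successive
       iterates compare exactly equal.  Under a known FP model this
       usually terminates (though for some inputs it can oscillate
       forever between two neighbouring values); in real arithmetic
       the iterates are distinct forever, so a pretend-its-real
       optimiser may conclude that the loop can never exit. */
    #include <stdio.h>

    static double naive_sqrt(double a)
    {
        double x = a, prev;
        do {
            prev = x;
            x = 0.5 * (x + a / x);  /* Newton step for x*x == a */
        } while (x != prev);        /* exact equality: model-dependent */
        return x;
    }

    int main(void)
    {
        printf("%.17g\n", naive_sqrt(2.0));
        return 0;
    }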

Expecting it to converge relies on precise knowledge of the FP model. It
would be a reasonable thing to do in the presence of -mieee (under
certain conditions). It would not be a reasonable thing to do in the
presence of -fpretend-its-real. The reasonable thing to do would be to
use "Cray-safe algorithms" that can be trusted to be robust against FP
perturbations.

In this particular example, I might test for convergence to within some
(carefully chosen) epsilon. Or perhaps I'd analyse the range of actual
parameters I expected to use and see if I could arrive at a reasonable
fixed constant for the number of iterations (which has the additional
advantage of potentially allowing the loop to be unrolled, avoiding
branches altogether).
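
(A sketch of both alternatives; the epsilon and the trip count below
are illustrative choices of mine, not values derived from any real
analysis.)

    #include <math.h>

    /* (a) Converge to within a carefully chosen relative epsilon. */
    static double sqrt_epsilon(double a)
    {
        double x = a, prev;
        do {
            prev = x;
            x = 0.5 * (x + a / x);
        } while (fabs(x - prev) > 1e-12 * fabs(x));
        return x;
    }

    /* (b) Fixed trip count chosen from the expected argument range;
       the loop body is branch-free and trivially unrollable. */
    static double sqrt_fixed(double a)
    {
        double x = a;
        int i;
        for (i = 0; i < 8; ++i)    /* 8 steps: an assumed bound */
            x = 0.5 * (x + a / x);
        return x;
    }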

> Of course that transformation
> is indeed gratuitous and blatantly extreme.

Actually, I don't agree that it is. Granted, an _infinite_ loop is a bit
dubious, but I'd view that as the programmer's fault for not being aware
of what they've asked the compiler to do, not as the compiler being
over-aggressive. Certainly similar transformations on finite loops,
along the lines of loop unrolling and partial evaluation (or even
complete evaluation if the arguments are known constants) at compile
time, are entirely reasonable things to do (modulo the suitability of
loop unrolling to the particular FPU architecture).
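
(Again a sketch of my own, not anyone's real code: with a constant
argument and a fixed trip count, a compiler doing complete evaluation
could fold the call below to a literal, loop and all.)

    /* Horner evaluation with unit coefficients; poly(2.0) == 15.0,
       computable entirely at compile time. */
    static double poly(double x)
    {
        double r = 0.0;
        int i;
        for (i = 0; i < 4; ++i)
            r = r * x + 1.0;
        return r;
    }

    double answer(void) { return poly(2.0); }  /* foldable to 15.0 */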

If an optimiser capable of such things behaved pathologically in the
presence of a theoretically infinite loop written by a naive
programmer, I'd file it under "serves you right".

Hmmm ... looking back at what I just wrote, it suddenly strikes me that
all the discussion about two different classes of programmers in this
thread so far has been misguided. There aren't two kinds of programmer
under consideration here; there are _three_:

(1) Programmers who understand the nuances of FP pretty well and want
predictable behaviour according to a known model, because they've
analysed their problems carefully and designed their algorithms that
way.

(2) Programmers who understand the nuances of FP pretty well and want
speed at the expense of predictable behaviour at the ULP level, because
they've analysed their problems carefully and designed their algorithms
that way.

(3) Programmers who don't know FP intimately and write their code
without careful analysis, and then wonder why (1.0/5.0)*5.0 != 1.0.
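
(An aside on (3): whether that particular identity fails depends on
the FP model in use; with IEEE double rounding, (1.0/5.0)*5.0 happens
to compare equal to 1.0. A substitute identity in the same spirit,
which genuinely fails even under strict IEEE rules:)

    #include <stdio.h>

    int main(void)
    {
        printf("%d\n", 0.1 + 0.2 == 0.3);  /* prints 0 */
        printf("%.17g\n", 0.1 + 0.2);      /* 0.30000000000000004 */
        return 0;
    }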

Most of the arguments here have followed from a failure to properly
recognise the difference between groups 2 and 3. I don't think we've
heard from anybody in group 3 here (hardly surprising, given the nature
of the list; presumably they're the overwhelming majority Out There),
but some of us in group 2 are getting tired of being treated as though
we were in group 3.

-- 
Ross Smith <ross.s@ihug.co.nz> The Internet Group, Auckland, New Zealand
========================================================================
"Unix has always lurked provocatively in the background of the operating
system wars, like the Russian Army."                  -- Neal Stephenson

