
Re: Fourth Draft "Unsafe fp optimizations" project description.


In article <3B7668F6.CF732D77@moene.indiv.nluug.nl> you write:
>
>However, for `sin' and `cos' this is different; the instructions might
>not be as accurate for all inputs as their library counterparts.

They are as "accurate" - it's just that they have a more limited
argument range.

The built-in x87 instruction just gives up completely once the argument
exceeds some arbitrary limit - with modern x87 implementations the magic
limit is |x| > 2**63.
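
For illustration, here is a minimal x86-only sketch (it relies on GCC
extended asm, and the x87_sin helper name is just made up for the demo)
comparing libm's sin() with the raw fsin instruction past that cutoff:

/* Minimal demo: libm sin() vs. the raw x87 fsin instruction for an
 * argument beyond fsin's 2**63 limit.  x86 only, GCC extended asm.
 * Build with something like "gcc -O2 fsin-demo.c -lm". */
#include <math.h>
#include <stdio.h>

static double x87_sin(double x)
{
	double result;

	/* For |x| beyond 2**63, fsin sets the C2 flag and leaves ST(0)
	 * untouched, so the "result" is simply the input argument. */
	__asm__("fsin" : "=t" (result) : "0" (x));
	return result;
}

int main(void)
{
	double big = 0x1p64;	/* 2**64: exactly representable, out of fsin's range */

	printf("libm sin(2**64) = %.17g\n", sin(big));
	printf("x87 fsin(2**64) = %.17g\n", x87_sin(big));
	return 0;
}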

>I mention this issue below under "Open issues", because I currently have
>no idea how to deal with this.

Why not just deal with it under "limited argument range"?

You actually already have _two_ cases that are about something very
similar:

>  3. Rearrangements whose effect is a loss of accuracy on a large subset of
>     the inputs and a complete loss on a small subset of the inputs.
>
>  4. Rearrangements whose effect is a loss of accuracy on half of the inputs
>     and a complete loss on the other half of the inputs.

I personally think (3) and (4) are exactly the same.  I don't agree with
your "large subset" vs "half" distinction - there is no such thing as
"half" of the floating point numbers except as a "half of the bit
representations" kind of thing, but that is completely meaningless.

It's a matter of loss of range, nothing more.  It's not that a random
half of the inputs suddenly lose accuracy.  With these kinds of
transformations you usually lose on the order of _one_ bit of the range
of the exponent.  Nothing more, nothing less.  Sure, that's "half the
numbers", but let's face it, it tends to be a rather extreme "half".
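
For illustration, a minimal C sketch of the classic one-bit case:
rewriting a/b/c as a/(b*c) overflows in the intermediate product even
though the true result is a perfectly ordinary double:

/* a/b/c rewritten as a/(b*c): the intermediate product overflows even
 * though the mathematical result is perfectly representable. */
#include <stdio.h>

int main(void)
{
	/* volatile just keeps the compiler from folding it all away */
	volatile double a = 1e100, b = 1e200, c = 1e200;

	double careful   = a / b / c;	/* 1e-300: a normal double */
	double rewritten = a / (b * c);	/* b*c -> +inf, so this is 0 */

	printf("a/b/c   = %g\n", careful);
	printf("a/(b*c) = %g\n", rewritten);
	return 0;
}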

The same is true of the "sin()"/"cos()" optimization.  Admittedly, you
lose a lot more range, in the sense that your maximum exponent goes from
11 bits to 6 bits.  But that only happens for positive exponents - you
still have the full 11 bits of exponent range for negative exponents, so
again you could call it "half the numbers".

Note that with sin/cos, you've really lost all accuracy long before you
hit 2**63.  By then the argument has basically no fractional bits left,
which means that anything you get out of sin/cos is nothing but white
noise.  So the only problem with the optimization is really that the
failure case is a bit _too_ abrupt (sin(2**64) isn't _really_ very close
to 2**64 ;)
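
To put a rough number on that, here is a minimal C sketch: well before
2**63 the spacing between adjacent doubles is already far bigger than
2*pi, so neighbouring representable arguments hit completely unrelated
points of the sine wave:

/* Near 2**60 adjacent doubles are 256 apart, so their sines are
 * completely uncorrelated - the accuracy of the argument was gone
 * long before fsin's 2**63 cutoff. */
#include <math.h>
#include <stdio.h>

int main(void)
{
	double x = 0x1p60;			/* 2**60 */
	double next = nextafter(x, INFINITY);	/* the neighbouring double */

	printf("spacing at 2**60 = %g\n", next - x);	/* prints 256 */
	printf("sin(x)    = % .17f\n", sin(x));
	printf("sin(next) = % .17f\n", sin(next));
	return 0;
}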

So bundle them all up under the heading

  3. Rearrangements whose effect is a loss of dynamic range in the
     inputs.

and maybe add a few examples to show what the loss is and what the
range is ("a/b/c -> a/(b*c) potentially loses one bit of exponent
range", "sin(x) limits the input range to values that have any accuracy
left in the output", etc.).

		Linus

