
Re: Fourth Draft "Unsafe fp optimizations" project description.



On Mon, 13 Aug 2001, Geert Bosch wrote:
>
> On Sun, 12 Aug 2001, Linus Torvalds wrote:
> > Hmm. As far as I know, the library routines, at least in glibc, do not
> > add any accuracy.
> >
> > The library routines _do_ add:
> >  - "errno" handling
> >  - extended range by reduction of the argument
> > but not accuracy.
>
> The way argument reduction is done affects accuracy for arguments
> whose absolute value is larger than half pi.

Yes. However, the Intel hardware does do this correctly for the range of
inputs that it handles, so the accuracy of the hardware should be fine
within that range. The i387 (at least in the PPro core) uses an internal
value of pi accurate to 66 bits, which is two bits more than extended
real itself can hold.
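
You can see why the precision of that internal pi matters with a trivial
experiment. This is only a sketch (it assumes an x86-style long double
with a 64-bit mantissa): fmod() itself is exact, so every bit of the
drift below comes from how well the divisor approximates 2*pi.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1.0e6;

    /* 2*M_PI is pi to 53 bits; fmod() is exact, but the error in
     * the divisor gets multiplied by the reduction quotient
     * (about 159000 for x = 1e6). */
    double reduced53 = fmod(x, 2.0 * M_PI);

    /* The same reduction with pi good to 64 bits. */
    long double pi64 = 3.14159265358979323846264338327950288L;
    long double reduced64 = fmodl((long double)x, 2.0L * pi64);

    printf("53-bit pi: %.17g\n", reduced53);
    printf("64-bit pi: %.17Lg\n", reduced64);
    printf("drift    : %.3Lg\n", reduced64 - (long double)reduced53);
    return 0;
}

The drift is on the order of 1e-10 for x = 1e6, and it grows linearly
with the reduction quotient. A 66-bit pi just pushes the same problem
two bits further out.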

If you want more details, please check out the "Pentium Processor User's
Manual, Vol 3: Architecture and Programming Manual", which actually has a
separate Appendix G ("Report on Transcendental Functions") on the accuracy
of the functions. It's kind of sad, really: they never did this for the
i486, but with the Pentium, Intel wanted to get respectability in the math
world, and did all this extra work. Then came the "fdiv" bug...

They never bothered to re-do the pretty scatter plots etc for the PPro
after that.

Anyway, for those who do not have the manuals, the bottom line is that
they actually are pretty careful, and do the reduction in microcode. For
all transcendental functions, Intel claims, and I quote:

	"On the Pentium microprocessor, the worst error case on all
	 transcendental functions is less than 1 ulp when rounding in
	 nearest mode, and less than 1.5 ulps when rounding in other
	 modes."

And realize that this is true in _extended_ precision, with 64 bits of
mantissa.

So don't worry too much about plain doubles. The thing is accurate.

Assuming they don't have an fdiv-like bug ;)
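
If you want to convince yourself on your own machine, here is a
quick-and-dirty test. It's only a sketch, and note the caveat: it uses
sinl() as the reference, which on x86 is typically the same fsin at 64
bits, so this measures agreement between the two precisions rather than
absolute truth.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double worst = 0.0;

    for (int i = 1; i <= 1000000; i++) {
        double x = (double)i * 1e-6;            /* sample (0, 1] */
        double got = sin(x);
        long double ref = sinl((long double)x);

        /* error in units of the last place of the result */
        double ulp = nextafter(got, HUGE_VAL) - got;
        double err = (double)fabsl(got - ref) / ulp;
        if (err > worst)
            worst = err;
    }
    printf("worst observed error: %.3f ulp\n", worst);
    return 0;
}

On a machine where both go through the hardware, you should see
something comfortably below 1 ulp, consistent with the claim above.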

The scatter plots etc in the manual also show verification over a claimed
total of 28 million points used for accuracy testing of "fsin", and 4
million points used for monotonicity testing. Again, quoting:

	"For all cases tested, the actual error was found to lie below the
	 bound obtained by the theoretical error analysis. Figure G1
	 through Figure G-22 are ulp plots that illustrate this
	 characterization information. .."

All typos and errors likely mine.

So I seriously doubt that you'd get better accuracy in the library
routines even if you tried _really_ hard. So I still maintain that the
only things an x86-based library routine can add are

 - reduction of the (completely worthless) range |x| > 2**63, where you
   have accuracy only in a very theoretical sense
 - "errno" handling etc
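
In code, the whole division of labor looks something like this. This is
a sketch, not glibc's actual source: raw_fsin() is the bare instruction
via GCC inline asm, and reduce_mod_2pi() is a crude stand-in for a real
Payne-Hanek style reduction (a serious libm would use pi to a few
hundred bits there, not 64).

#include <errno.h>
#include <math.h>

/* The bare hardware instruction (x86 + GCC inline asm assumed). */
static double raw_fsin(double x)
{
    double r;
    __asm__ ("fsin" : "=t" (r) : "0" (x));
    return r;
}

/* Crude stand-in for the huge-argument reduction. */
static double reduce_mod_2pi(double x)
{
    return (double)fmodl((long double)x,
                         2.0L * 3.14159265358979323846264338327950288L);
}

/* What the library adds on top of fsin: errno handling, and the
 * |x| >= 2**63 range that the hardware refuses. */
double my_sin(double x)
{
    if (isinf(x)) {
        errno = EDOM;   /* sine of infinity is a domain error */
        return x - x;   /* quiet NaN */
    }
    if (isnan(x))
        return x;       /* NaN in, NaN out, no errno */
    if (fabs(x) >= 0x1p63)
        x = reduce_mod_2pi(x);
    return raw_fsin(x);
}

Everything below 2**63 goes straight to the chip; the library is just a
thin wrapper around it.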

Now, the range |x| > 2**63 may be worthless from an accuracy standpoint
(the difference between adjacent fp numbers is so big that taking the
sine of consecutive numbers just gives noise), but obviously returning a
number outside the range [-1,1] for sine has to be considered very
suspect indeed. And -ffast-math _will_ do exactly that, never mind that
the argument is arguably completely bogus.
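
You can watch it happen with the raw_fsin() from the sketch above (again
x86-only, GCC inline asm assumed): for |x| >= 2**63 the fsin instruction
just sets the C2 flag and leaves the operand untouched, so the "sine"
you get back is the argument itself.

#include <stdio.h>

static double raw_fsin(double x)
{
    double r;
    __asm__ ("fsin" : "=t" (r) : "0" (x));
    return r;
}

int main(void)
{
    double x = 0x1p64;  /* 2**64, well past the 2**63 limit */
    double s = raw_fsin(x);

    printf("fsin(2**64) = %g\n", s);    /* prints 1.84467e+19 */
    printf("in [-1,1]?    %s\n", (s >= -1.0 && s <= 1.0) ? "yes" : "no");
    return 0;
}

A compiler that inlines sin() as a bare fsin under -ffast-math gives you
exactly this behavior, because nobody is left to check the C2 flag.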

I think that ridiculous arguments can get ridiculous results. Garbage in,
garbage out. But hey, maybe somebody has an algorithm that cares.

		Linus

