This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
real.c implementation
- From: Brad Lucier <lucier at math dot purdue dot edu>
- To: gcc at gcc dot gnu dot org
- Cc: lucier at math dot purdue dot edu (Brad Lucier)
- Date: Thu, 17 Oct 2002 14:01:05 -0500 (EST)
- Subject: real.c implementation
Recently, there has been discussion about the floating-point arithmetic
implementation in real.c. I plan to submit some patches to fix some small
errors soon (possibly this weekend) and adapt Gaston Gonnet's tests
(previously mentioned on this list) for the situation where the target
arithmetic you're testing is not the same as the supported host arithmetic.
However, with Roger Sayle's submission of the sqrt patch, I'd like to
make some general comments about strategy.
There are basically two approaches to implementing proper IEEE arithmetic:
1. Implement algorithms where the working precision is the target
precision plus two bits (guard and sticky) and apply whatever
tricks there are in the literature (Tuckerman's test for 0.5ulp
accuracy of sqrt, Guy Steele's and Will Clinger's algorithms for
exact decimal<->binary conversions) to get the correct answers.
2. Implement somewhat inaccurate algorithms in an intermediate precision
so large that conversion to the target precision gives correct
results.
I believe that real.c adopts the second approach.
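For concreteness, here is a minimal sketch of the guard/sticky rounding at the heart of strategy 1 (in Python for brevity; the helper name and bit layout are mine, not code from real.c):

```python
def round_to_nearest_even(sig, extra_bits):
    """Round an integer significand that carries `extra_bits` low-order
    bits beyond the target precision, using a guard bit and a sticky bit
    (round to nearest, ties to even)."""
    if extra_bits == 0:
        return sig
    # The guard bit is the first dropped bit; everything below it is
    # collapsed into a single sticky bit.
    guard = (sig >> (extra_bits - 1)) & 1
    sticky = 1 if (sig & ((1 << (extra_bits - 1)) - 1)) else 0
    kept = sig >> extra_bits
    # Round up when guard=1, unless we are exactly halfway (sticky=0)
    # and the kept significand is already even.
    if guard and (sticky or (kept & 1)):
        kept += 1
    return kept

# 0b10110 dropped by 2 bits: guard=1, sticky=0, kept=0b101 (odd) -> 0b110
print(round_to_nearest_even(0b10110, 2))  # 6
```

The point of strategy 1 is that, with the guard and sticky bits maintained correctly through each operation, this single rounding step is exact in every rounding mode.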
One may have several goals here:
A. Implement correct round-to-nearest target precision results.
B. Implement correct directed-rounding (up, down, or toward zero)
target precision results.
Now, strategy 1 works (by definition ;-) for both goals A and B. However,
generally speaking, strategy 2 works only for goal A (i.e., it does not
work with directed roundings), and it works only when the intermediate
precision is "large enough". Large enough means that to get correctly
rounded results to k bits accuracy in the target precision, one must
calculate intermediate results to at least 2k+2 bits accuracy, and this
bound is sharp.
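To see why too small an intermediate precision fails, here is a small self-contained illustration of the double-rounding effect with integer significands (the function name and the 4/8/12-bit widths are mine, chosen only to keep the example readable):

```python
def rne(sig, drop):
    """Round an integer significand to nearest, ties to even,
    dropping `drop` low-order bits."""
    if drop == 0:
        return sig
    guard = (sig >> (drop - 1)) & 1
    sticky = sig & ((1 << (drop - 1)) - 1)
    kept = sig >> drop
    if guard and (sticky or (kept & 1)):
        kept += 1
    return kept

# A 12-bit significand whose top 4 bits are the target result.
# Directly, the dropped tail is just over one half ulp, so we round up.
# Rounded first to 8 bits, the tiny trailing 1 is lost, leaving an
# exact halfway case that ties-to-even then rounds the wrong way.
x = 0b101010000001

direct  = rne(x, 8)          # one rounding straight to 4 bits
twostep = rne(rne(x, 4), 4)  # via an 8-bit intermediate
print(direct, twostep)       # 11 10
```

Here an 8-bit intermediate is not "large enough" for a 4-bit target (8 < 2*4+2 = 10), and the two results disagree in the last place.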
Based on these theoretical results, I expect that the current 160-bit
intermediate precision should be fine for goal A for single, double,
and extended-double IEEE arithmetic, but not for either IBM 106-bit
precision arithmetic or quad-length IEEE arithmetic (with a 113-bit mantissa).

(It is because of the inadequate length of the intermediate precision for
IEEE-quad arithmetic that I suggested to Roger Sayle that for his patch
http://gcc.gnu.org/ml/gcc-patches/2002-10/msg01033.html
he include the target mode so that one can use Tuckerman's test to
get correctly-rounded results in the target precision.
In other words, I was confused, and conflated the two strategies 1 and 2
in order to get around the limited precision of the intermediate
calculations.)
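For reference, the condition that Tuckerman's test establishes can be written down directly in exact rational arithmetic (this helper and its name are illustrative only; Tuckerman's actual trick gets the same answer from exactly computed products without forming the half-ulp, and ties are ignored here for brevity):

```python
from fractions import Fraction

def sqrt_is_correctly_rounded(x, y, ulp):
    """y is a correctly rounded-to-nearest square root of x iff
    sqrt(x) lies within half an ulp of y, i.e., in exact arithmetic,
    (y - ulp/2)^2 <= x <= (y + ulp/2)^2."""
    x, y, ulp = Fraction(x), Fraction(y), Fraction(ulp)
    return (y - ulp / 2) ** 2 <= x <= (y + ulp / 2) ** 2

# With 4-bit significands in [1,2) the spacing (ulp) is 1/8;
# sqrt(2) ~ 1.41421 rounds to 1.011b = 11/8 = 1.375.
print(sqrt_is_correctly_rounded(2, Fraction(11, 8), Fraction(1, 8)))  # True
print(sqrt_is_correctly_rounded(2, Fraction(12, 8), Fraction(1, 8)))  # False
```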
Having said this, Richard's adaptation of paranoia to use real.c passes
in *all* these precisions. I believe, however, that Gonnet's implementation
of the lattice algorithm will find examples (after several hours of
computation, unfortunately) where the current real.c will give incorrectly
rounded results, in either IBM 106-bit precision or IEEE-quad precision.
So, what to do? It depends on answers to the following questions:
(i) Is goal B ever a possibility? If so, we'll have to abandon
strategy 2 and rely solely on strategy 1. This will require
a significant rewrite of real.c.
(ii) Should one believe the theorems and increase the intermediate
precision so that results before rounding are accurate to at
least 2*113+2=228 bits? One would need 256 bits of intermediate
accuracy to do this. One could either believe the theorems or
wait for someone like me to adapt Gonnet's tests to come up with
real examples of erroneous results of the current implementation
for, e.g., quad-IEEE arithmetic.
In part (ii), one might have the intermediate precision be machine-dependent,
so only target machines with TFmode arithmetic (is that right? I mean 128-bit
floating-point numbers) would actually compute intermediate results to
256 bits accuracy, and target machines with only smaller arithmetic could
get by with 160-bit arithmetic.
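Applying the sharp 2k+2 bound mechanically gives the figures above (rounding the requirement up to a multiple of 32 bits is my own assumption about a word-based implementation, purely for illustration):

```python
def min_intermediate_bits(target_mantissa_bits):
    """Smallest word-rounded intermediate precision satisfying the
    sharp bound: correctly rounded k-bit results need intermediate
    results accurate to at least 2k+2 bits."""
    need = 2 * target_mantissa_bits + 2
    return ((need + 31) // 32) * 32  # round up to a 32-bit word boundary

for name, k in [("single", 24), ("double", 53), ("extended", 64),
                ("IBM double-double", 106), ("IEEE quad", 113)]:
    print(name, k, min_intermediate_bits(k))
```

Extended double (k=64) lands exactly on the current 160 bits, while IEEE quad (k=113) needs 2*113+2 = 228, hence 256.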
At any rate, I wanted to bring up these topics while the discussion was
still "warm"; I intend to try to get some results this weekend.
Brad