This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.
Re: surprising precision
- From: "John S. Fine" <johnsfine at verizon dot net>
- To: Jonathan <jsealy at cwgsy dot net>
- Cc: gcc-help at gcc dot gnu dot org
- Date: Wed, 03 Mar 2010 16:52:55 -0500
- Subject: Re: surprising precision
- References: <D5846594-AE6E-4BCD-B81E-6D8DB68EB780@cwgsy.net>
Part of your confusion is over the difference between "precision" and
"accuracy".
A floating point format has a FINITE number of specific values that can
be stored with infinite accuracy. You happen to have chosen values that
are in that finite set.
You can set the precision of your output to whatever (finite value) you
like. It may be much more than the accuracy of your number. It may be
much less than the accuracy of your number (especially if the accuracy
of your number is infinite).
Even if the accuracy of a stored floating point number is infinite, it
may be possible for the algorithm that converts it from binary to
decimal to introduce some inaccuracy. I'm not sure of those details.
Your results seem to indicate that translation is done surprisingly well.
To help you understand, instead of outputting just d each time, output
both d and d+1. At some point d will still be 100% accurate, but d+1
will not be accurate; in fact, it will be exactly equal to d.
> Please have a look at my query posted at the following 3 forums.
> It has not received any explanations yet.
> On the one hand it appears not to be a problem, but on the other hand
> it could be a spectacular bug.