This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.



Re: Possible regression from gcc-3.0.x to gcc-3.2.x with std::setprecision


Hi All,

 Has anyone looked at this possible regression from gcc-3.0 to gcc-3.2
concerning the number of digits of precision used when writing out
floats and doubles?

The original message is in

    http://gcc.gnu.org/ml/libstdc++/2002-12/msg00064.html

Basically, libstdc++ in v3.2 (and above) limits the maximum precision
of a double to 16 [1] digits, whereas the constant

    std::numeric_limits<double>::min()

needs 18 digits in order to be written out to a stream and read back
from the stream so that it compares equal to the original value.

gcc-3.0 limited the precision to 18 [2] digits.

[1] std::numeric_limits<float, double>::digits10 + 1
[2] std::numeric_limits<float, double>::digits10 + 3

Andrew.
--
 Andrew Pollard, Brooks-PRI Automation  | home: andrew@andypo.net
670 Eskdale Road, Winnersh Triangle, UK | work: Andrew.Pollard@brooks-pri.com
 Tel/Fax:+44 (0)118 9215603 / 9215660   | http://www.andypo.net
