Bug 14969 - num_put incorrectly limits output precision
Summary: num_put incorrectly limits output precision
Status: RESOLVED DUPLICATE of bug 14220
Alias: None
Product: gcc
Classification: Unclassified
Component: libstdc++
Version: 4.0.0
Importance: P2 normal
Target Milestone: ---
Assignee: Not yet assigned to anyone
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2004-04-15 19:22 UTC by Matt Austern
Modified: 2005-07-23 22:49 UTC

See Also:
Host: powerpc-apple-darwin7.3.0
Target: powerpc-apple-darwin7.3.0
Build: powerpc-apple-darwin7.3.0
Known to work:
Known to fail:
Last reconfirmed:


Description Matt Austern 2004-04-15 19:22:54 UTC
Consider the following test case:
#include <iostream>
#include <iomanip>

int main()
{
  double pi = 3.14159;
  std::cout << std::setprecision(40) << pi << std::endl;
}

Compiling and running this program gives the output:

3.1415899999999999

We've got 16 digits after the decimal point, even though we've requested 40.

There's code in locale_facets.tcc that limits the precision if it's "out of range".  I don't see any justification for doing that, either in clause 22.2.2.2.2 of the C++ standard, which defines the behavior in terms of printf, or in clause 7.19.6.1 of the C standard, which describes printf itself.  (FWIW, the C standard does say that there is an implementation-defined upper limit for printf conversions, but it is required to be at least 4095.)
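
A minimal way to see the expected behavior, assuming the usual mapping of the default floatfield to the %g conversion described in 22.2.2.2.2, is to route the same value through printf; on a conforming implementation the two lines below should print the same digits:

#include <cstdio>
#include <iostream>
#include <iomanip>

int main()
{
  double pi = 3.14159;

  // With the default floatfield, a stream precision of 40 corresponds
  // to the "%.40g" conversion per the table in 22.2.2.2.2, so both
  // lines should agree on a conforming implementation.
  std::printf("%.40g\n", pi);
  std::cout << std::setprecision(40) << pi << std::endl;
}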
Comment 1 Paolo Carlini 2004-04-15 19:39:31 UTC

*** This bug has been marked as a duplicate of 14220 ***