This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug c++/51559] decimal128 operates incorrectly compared to decimal32 and decimal64


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51559

Ganton <kubry at terra dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kubry at terra dot com

--- Comment #3 from Ganton <kubry at terra dot com> ---
> The problem is inherent to using floating points to initialize decimal128

Instead of:
 - using a binary floating point to initialize a decimal128, or
 - dividing integers to initialize a decimal128,
people can use a decimal literal to initialize a decimal128.

In the prior examples, instead of
   std::decimal::decimal128 dn(.3), dn2(.099), dn3(1000), dn4(201);
people can use
   std::decimal::decimal128 dn(.3dl), dn2(.099dl), dn3(1000), dn4(201);
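
To make the difference concrete, here is a minimal self-contained sketch
(not part of the original report). It assumes a GCC build with decimal
floating-point support, the libstdc++ <decimal/decimal> header
(ISO/IEC TR 24733), and the GNU dl literal suffix being accepted in C++:

   #include <decimal/decimal>
   #include <iostream>

   int main()
   {
     using std::decimal::decimal128;

     // Binary initialization: .3 and .099 are rounded to binary double
     // first, so the decimal128 values only approximate .3 and .099.
     decimal128 bn(.3), bn2(.099), bn3(1000), bn4(201);

     // Decimal initialization: .3dl and .099dl are exact decimal values.
     decimal128 dn(.3dl), dn2(.099dl), dn3(1000), dn4(201);

     // (.3 - .099) * 1000 == 201 holds exactly only for the decimal
     // initialization; the binary one keeps the double's rounding error.
     std::cout << ((bn - bn2) * bn3 == bn4) << '\n';  // expected: 0
     std::cout << ((dn - dn2) * dn3 == dn4) << '\n';  // expected: 1
     return 0;
   }

This also suggests why decimal32 and decimal64 appear to behave: rounding
the double 0.2999999999999999888... to 7 or 16 significant decimal digits
happens to give exactly .3, while decimal128's 34 digits preserve the
binary rounding error.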

