[Bug c/61399] LDBL_MAX is incorrect with IBM long double format / overflow issues near large values
vincent-gcc at vinc17 dot net
gcc-bugzilla@gcc.gnu.org
Thu Nov 17 15:56:00 GMT 2016
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61399
Vincent Lefèvre <vincent-gcc at vinc17 dot net> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |RESOLVED
         Resolution|---                         |INVALID
--- Comment #4 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
(In reply to Vincent Lefèvre from comment #0)
> By definition, for radix b = 2, LDBL_MAX is the value (1 - 2^(-p)) * 2^emax
> (see §5.2.4.2.2p12), which is the largest value representable in the
> floating-point model.
There has since been a defect report, and this is no longer the case. See:
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2092.htm
"DR 467 decided that FLT_MAX, DBL_MAX, LDBL_MAX are the maximum representable
finite numbers for their respective types."
So this settles the issue: GCC's LDBL_MAX is correct according to this DR. As a
consequence, I'm resolving the PR as INVALID.