[Bug c/61399] LDBL_MAX is incorrect with IBM long double format / overflow issues near large values

vincent-gcc at vinc17 dot net gcc-bugzilla@gcc.gnu.org
Tue Jun 20 19:14:00 GMT 2017


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61399

--- Comment #10 from Vincent Lefèvre <vincent-gcc at vinc17 dot net> ---
(In reply to joseph@codesourcery.com from comment #9)
> That wording defines what "normalized" is, so that values with maximum 
> exponent in this case don't count as normalized because not all values 
> with that exponent in the floating-point model are representable, and 
> defines LDBL_MAX so that it doesn't need to be normalized (and in this 
> case isn't, by that definition of normalized).  The definition of 
> LDBL_MAX_EXP is unchanged; it still just requires 2^(LDBL_MAX_EXP-1) to be 
> representable, without requiring it to be normalized.

This would be pretty useless as a definition. It would mean that emin lies in
the "normalized" range (by the definition of LDBL_MIN_EXP), but that one has no
guarantee at all for the larger exponents.

Thus a type that contains only 0, the normalized values of exponent emin,
1+LDBL_EPSILON, and 2^(LDBL_MAX_EXP-1) could be a valid type (with, say, long
double = double = float), simply because, under that reading, emax has no
relation to the normalized values.

Note that the standard doesn't specify a range for the property associated with
LDBL_DIG, and this property obviously does not hold for *every* floating-point
number with q decimal digits. In particular, it fails in general when the
target value falls in the range of the subnormal numbers. So I don't expect it
to necessarily hold outside the range of the normalized numbers either.
