This is the mail archive of the egcs-bugs@cygnus.com mailing list for the GCC project.
RE: Alpha Linux gcc bug
- To: "'Arun Sharma'" <asharma at netscape dot com>
- Subject: RE: Alpha Linux gcc bug
- From: Kaz Kylheku <kaz at cafe dot net>
- Date: Wed, 18 Mar 1998 15:34:02 -0800
- Cc: "egcs-bugs at cygnus dot com" <egcs-bugs at cygnus dot com>
On Wednesday, March 18, 1998 1:12 PM, Arun Sharma [SMTP:email@example.com] wrote:
> The following message is a courtesy copy of an article
> that has been posted to gnu.gcc.bug as well.
> The following program core dumps, when converting the double to an
> int. Can someone explain why ?
Yes; your language reference manual. First of all, storing into
a union through a member of one type and then reading it back
through a member of another type leads to implementation-defined
behavior.
Secondly, the conversion of a floating point value to an integral
type leads to undefined behavior if the integral type's range
isn't large enough to contain the floating point value (truncated
to an integer).
> typedef unsigned int uint32;
Typedefing unsigned int to uint32 does not guarantee that
it shall be 32 bits wide. You are fooling yourself.
> #define DOUBLE_HI32_EXPMASK 0x7ff00000
> #define DOUBLE_HI32_MANTMASK 0x000fffff
This sure looks like you are constructing an IEEE
double precision value whose binary exponent field is all
ones -- the encoding of an infinity or NaN.
Do you honestly think that such an astronomical
value can be converted to a value of type int without triggering
some sort of exception on a reasonably designed
implementation?
By the way, on a little-endian machine, you would
have to swap the byte order of the masks, not
just interchange the order of the two 32 bit
words.
Why are you bothering the EGCS bugs list with this?
The bug is in your understanding of the programming language,
not in the compiler.