[Bug c/18666] Conversion of floating point into bit-fields
- From: "joseph at codesourcery dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 25 Nov 2004 00:52:26 -0000
- Subject: [Bug c/18666] Conversion of floating point into bit-fields
- References: <20041125003250.18666.jakub@gcc.gnu.org>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Additional Comments From joseph at codesourcery dot com 2004-11-25 00:52 -------
Subject: Re: New: Conversion of floating point into bit-fields
On Thu, 25 Nov 2004, jakub at gcc dot gnu dot org wrote:
> a valid test or not? This worked with 3.4.x and earlier, but doesn't
> any longer. The question is mainly whether the type of a.i for the
> purposes of 6.3.1.4/1 is unsigned int (in which case it would be
> well-defined: 16 is representable in unsigned int, and storing 16 into
> an unsigned int i : 1 bit-field is defined), or whether the type is an
> integer type with precision 1.
There are at least three defect reports (DRs) affirming that the type is
unsigned:1, i.e., a type with precision 1.
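
For concreteness, a minimal sketch of the kind of testcase at issue
(hypothetical names; the actual testcase is in the bug report):

    /* Sketch illustrating the two readings discussed above. */
    struct S { unsigned int i : 1; };

    int main(void)
    {
        struct S a;
        double d = 16.0;

        /* Reading 1: the conversion target is unsigned int.  16.0 -> 16
           is representable, and storing 16 into a 1-bit unsigned
           bit-field then reduces modulo 2, so a.i == 0: well-defined.

           Reading 2: the conversion target is the bit-field's own type,
           unsigned:1 (precision 1).  16 is not representable in it, so
           per C99 6.3.1.4p1 the conversion is undefined behavior. */
        a.i = d;
        return a.i;
    }

Under the DRs' reading, the assignment above is undefined, which is why
it is not required to keep behaving as it did with 3.4.x.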
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18666