This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug c/18666] Conversion of floating point into bit-fields


------- Additional Comments From joseph at codesourcery dot com  2004-11-25 00:52 -------
Subject: Re:  New: Conversion of floating point into bit-fields

On Thu, 25 Nov 2004, jakub at gcc dot gnu dot org wrote:

> [Is this] a valid test or not?  This worked with 3.4.x and earlier, but
> doesn't any longer.  The question is mainly whether the type of a.i for the
> purposes of 6.3.1.4/1 is unsigned int (in that case the conversion would be
> well-defined: 16 is representable in unsigned int, and storing 16 into an
> unsigned int i : 1 bit-field is defined), or whether the type is an integer
> type with precision 1.

There are at least three DRs (C standard defect reports) affirming that the
type is unsigned:1, i.e., a type with precision 1.
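
A minimal sketch of the kind of test at issue (the PR's exact test case is
not quoted in this message, so the struct and variable names below are
illustrative):

    /* Illustrative reconstruction, not the PR's exact test case.  If the
       type of a.i for the purposes of 6.3.1.4/1 were unsigned int, the
       conversion of 16.0 would be well-defined and the store would then
       reduce the value modulo 2.  If the type is unsigned:1, the value 16
       is not representable in it, so the floating-to-integer conversion
       is undefined.  */
    struct S {
        unsigned int i : 1;
    };

    int main(void)
    {
        struct S a;
        double d = 16.0;
        a.i = d;  /* defined or undefined depending on the bit-field's type */
        return 0;
    }

Under the unsigned:1 reading affirmed by the DRs, the conversion is
undefined, which is consistent with GCC no longer preserving the behaviour
the reporter saw with 3.4.x and earlier.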



-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18666

