This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
RE: typeof and bitfields
- From: "Dave Korn" <dave dot korn at artimi dot com>
- To: "'Andreas Schwab'" <schwab at suse dot de>
- Cc: "'Ian Lance Taylor'" <ian at airs dot com>, "'Neil Booth'" <neil at daikokuya dot co dot uk>, "'Matt Austern'" <austern at apple dot com>, "'Gabriel Dos Reis'" <gdr at integrable-solutions dot net>, <gcc at gcc dot gnu dot org>, "'Andrew Pinski'" <pinskia at physics dot uc dot edu>
- Date: Fri, 14 Jan 2005 19:40:40 -0000
- Subject: RE: typeof and bitfields
> -----Original Message-----
> From: Andreas Schwab
> Sent: 14 January 2005 16:36
> Note that it is implementation-defined whether bit-fields of
> type int are signed or unsigned.
Wow. I didn't know that; I thought it was only ever chars that could vary.
And if I had known, I would have put an explicit 'signed' qualifier anyway!
> > What is the range of values in a 1-bit signed int? Is that 1 bit the
> > sign bit or the value field?
> It's one sign bit and zero value bits.
> > Can bar hold the values 0 and 1, or 0 and -1, or some other set?
> Depends on the representation: with two's complement it's -1 and 0,
> with sign/magnitude or one's complement it's 0 and -0.
Aha! I knew there was more to life than 2's C!
> > In a one-bit field, the two's-complement operation degenerates into
> > the identity - how can the concept of signed arithmetic retain any
> > coherency in this case?
> It's no different from -INT_MIN: you get an overflow.
Thanks to you and everyone else who replied. It's nice to get a really
definitive answer to something you've always wondered about but never been
sure of.
Can't think of a witty .sigline today....