This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
RE: typeof and bitfields
- From: "Dave Korn" <dave dot korn at artimi dot com>
- To: "'Ian Lance Taylor'" <ian at airs dot com>,"'Neil Booth'" <neil at daikokuya dot co dot uk>
- Cc: "'Matt Austern'" <austern at apple dot com>,"'Gabriel Dos Reis'" <gdr at integrable-solutions dot net>,<gcc at gcc dot gnu dot org>,"'Andrew Pinski'" <pinskia at physics dot uc dot edu>
- Date: Fri, 14 Jan 2005 15:57:42 -0000
- Subject: RE: typeof and bitfields
> -----Original Message-----
> From: gcc-owner On Behalf Of Ian Lance Taylor
> Sent: 14 January 2005 03:03
> I think the right semantics are for typeof to return the underlying
> type, whatever it is, usually int or unsigned int. Perhaps just
> return make_[un]signed_type on the size of the mode of the bitfield,
> or something along those lines.
>
> If we implement that, and document it, I think it will follow the
> principle of least surprise.
>
> I don't see how giving an error is helpful.
>
> Ian
That seems _really_ wrong to me.
If typeof (x) returns int, then I ought to be able to store INT_MAX in there
and get it back, shouldn't I? Otherwise, why shouldn't typeof applied to a char
return int as well? A char has the same 'underlying type' too; they differ only
in size, so there's no reason to treat bitfields and chars differently.
You could perhaps make an argument for returning the largest integer type that
is no wider than the bitfield; i.e. bitfields of 8-15 bits -> char, 16-31
bits -> short, 32+ bits -> int (on a 32-bit-int platform; adjust as appropriate
for the target of your preference). But if typeof(x) == typeof(y), and yet x
cannot represent the same domain of values as y, then I'd say typeof was
conveying bogus information.
While we're on the subject, I've always been curious what on earth
struct foo {
    int bar : 1;
};
could possibly mean. What is the range of values of a 1-bit signed int? Is
that 1 bit the sign bit or the value field? Can bar hold the values 0 and 1, or
0 and -1, or some other set? (+1 and -1, maybe, or perhaps the only two values
it can hold are +0 and -0?) In a one-bit field, the two's-complement negation
operation degenerates into the identity - how can the concept of signed
arithmetic retain any coherency in this case?
cheers,
DaveK
--
Can't think of a witty .sigline today....