This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: typeof and bitfields
- From: Gabriel Dos Reis <gdr at integrable-solutions dot net>
- To: "Dave Korn" <dave dot korn at artimi dot com>
- Cc: "'Ian Lance Taylor'" <ian at airs dot com>, "'Neil Booth'" <neil at daikokuya dot co dot uk>, "'Matt Austern'" <austern at apple dot com>, <gcc at gcc dot gnu dot org>, "'Andrew Pinski'" <pinskia at physics dot uc dot edu>
- Date: 14 Jan 2005 17:49:15 +0100
- Subject: Re: typeof and bitfields
- Organization: Integrable Solutions
- References: <NUTMEGZfv4eqfxrCXIj00001066@NUTMEG.CAM.ARTIMI.COM>
"Dave Korn" <dave.korn@artimi.com> writes:
| > -----Original Message-----
| > From: gcc-owner On Behalf Of Ian Lance Taylor
| > Sent: 14 January 2005 03:03
|
| > I think the right semantics are for typeof to return the underlying
| > type, whatever it is, usually int or unsigned int. Perhaps just
| > return make_[un]signed_type on the size of the mode of the bitfield,
| > or something along those lines.
| >
| > If we implement that, and document it, I think it will follow the
| > principle of least surprise.
| >
| > I don't see how giving an error is helpful.
| >
| > Ian
|
| That seems _really_ wrong to me.
|
| If typeof (x) returns int, then I ought to be able to store INT_MAX in there
| and get it back, shouldn't I? Otherwise, why not return typeof(char)==int as
| well? They've got the same 'underlying type' too; they differ only in size;
| there's no reason to treat bitfields and chars differently.
That is an argument for not returning an int. It is not an argument
for issuing an error. Why not return int_with_2bits?
-- Gaby