This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: typeof and bitfields


"Dave Korn" <dave.korn@artimi.com> writes:

> > I think the right semantics are for typeof to return the underlying
> > type, whatever it is, usually int or unsigned int.  Perhaps just
> > return make_[un]signed_type on the size of the mode of the bitfield,
> > or something along those lines.
> > 
> > If we implement that, and document it, I think it will follow the
> > principle of least surprise.
> > 
> > I don't see how giving an error is helpful.
> > 
> > Ian
> 
>   That seems _really_ wrong to me.
> 
>   If typeof (x) returns int, then I ought to be able to store INT_MAX in there
> and get it back, shouldn't I?  Otherwise, why not return typeof(char)==int as
> well?  They've got the same 'underlying type' too; they differ only in size;
> there's no reason to treat bitfields and chars differently.

In principle, perhaps.  In practice, in C, types are not first-class
objects.  There is a very limited set of operations you can do with
the result of typeof: in fact, the only useful things you can do with
it are to declare a variable or to use it in a cast.  If we simply
define typeof as returning a type which is large enough to hold any
value which can be put into the argument of the typeof, then I think
we are consistent and coherent.  Yes, it is true that you will be
able to store values into a variable declared using the result of
typeof which you cannot then store back into the variable which was
the argument of typeof.  That might be a problem in principle, but I
don't see why it would be a problem in practice.
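
To make that concrete, here is a rough sketch of what the proposed
semantics would allow (the struct and field names are made up purely
for illustration; whether typeof on a bit-field is accepted at all is
exactly what is under discussion here):

     struct s { int f : 3; } x;

     /* Under the proposal, typeof (x.f) is the declared underlying
        type of the bit-field, i.e. plain int.  */
     typeof (x.f) tmp = 1000;   /* fine: tmp is an ordinary int */

     x.f = tmp;                 /* but 1000 does not fit back into
                                   the 3-bit field */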

The reason to support typeof in this way is to make cases like the
example in the gcc manual work correctly.

     #define max(a,b) \
       ({ typeof (a) _a = (a); \
          typeof (b) _b = (b); \
          _a > _b ? _a : _b; })
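
With those semantics a call such as the one below (a made-up struct,
purely for illustration) does what you would expect: the bit-field
argument is copied into a temporary of its underlying type and the
comparison is done on ordinary ints.

     struct counter { int hits : 4; } c = { 5 };

     int n = max (c.hits, 3);   /* _a is declared with the underlying
                                   type of c.hits, i.e. int; n is 5 */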


>   While we're on the subject, I've always been curious what on earth
> 
> struct foo {
>    int   bar : 1;
> };
> 
> could possibly mean.  What is the range of values in a 1-bit signed int?  Is
> that 1 bit the sign bit or the value field?  Can bar hold the values 0 and 1, or
> 0 and -1, or some other set?  (+1 and -1, maybe, or perhaps the only two values
> it can hold are +0 and -0?)  In a one bit field, the twos-complement operation
> degenerates into the identity - how can the concept of signed arithmetic retain
> any coherency in this case?

It holds the set of values {0, -1}.  This is no different from the
fact that -INT_MIN is itself INT_MIN on a twos-complement machine.
Signed arithmetic in a twos-complement representation is inherently
incoherent, at least when compared to arithmetic over the unbounded
integers.
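
A small test program makes the point concrete (storing 1 into the
field is implementation-defined, since 1 is not representable in a
1-bit signed field, but gcc on a twos-complement target keeps the low
bit and sign-extends it when the field is read back):

     #include <stdio.h>

     struct flag { int bit : 1; };

     int main (void)
     {
       struct flag f;

       f.bit = 0;
       printf ("%d\n", f.bit);   /* prints 0 */

       f.bit = -1;
       printf ("%d\n", f.bit);   /* prints -1 */

       f.bit = 1;                /* 1 is not representable; the stored
                                    bit pattern is 1 */
       printf ("%d\n", f.bit);   /* prints -1 on a twos-complement
                                    target: the only values are 0 and -1 */

       return 0;
     }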

Ian

