This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Two suggestions for gcc C compiler to extend C language (by WD Smith)


Let's imagine we have a 4-bit type, called nibble.

sizeof(nibble) == 1, because you can't have an object with a smaller size.

nibble a[2];
sizeof(a) == 1;

Because otherwise there isn't much benefit.

So now we have a type which violates one of the core rules of the type
system. sizeof(nibble[2 * N]) != 2 * sizeof(nibble[N])

That means any generic code that works with arbitrary types doesn't
work with this type.

This also breaks various idioms:

nibble b[3];
for(int i=0; i < sizeof(b)/sizeof(b[0]); ++i)
    ...

This loop doesn't visit all the elements: with three nibbles packed
into two bytes, sizeof(b) == 2 and sizeof(b[0]) == 1, so the count
comes out as 2 and b[2] is never visited.

Given a pointer to an array of nibbles and a length, how do I iterate
through the array?

void do_something(nibble* p);
void visit_all(nibble* p, size_t n) {
    while (n--)
        do_something(p + n);  /* needs the address of p[n] */
}

Pointer arithmetic won't work, because you can't address half a byte.
What is the address of p[n-1]? You can't know without knowing whether
p points to the first or second nibble in a byte.

C does not naturally work with objects smaller than a byte (which
means the smallest addressable memory location, and is at least 8 bits
but might be more). They can exist as bit fields inside other objects,
but not in isolation.

To use such types it is much cleaner to define an abstraction layer
that does the packing, masking, extracting etc. for you: either a set
of functions that work with a pointer to some buffer into which 4-bit
values are stored, or a C++ class that gives the appearance of a
packed array of 4-bit types but is implemented as a buffer of bytes.
But I suggested that right at the start of the thread and you
dismissed it (incorrectly saying it required C++, which is nonsense).

// A double nibble. (Note: signed char as a bit-field type is a
// widely supported extension; ISO C only requires _Bool, int,
// signed int and unsigned int here.)
typedef struct nibnib {
  signed char n1 : 4;
  signed char n2 : 4;
} nibnib;
static inline signed char nib_get1(const nibnib* n) { return n->n1; }
static inline signed char nib_get2(const nibnib* n) { return n->n2; }
static inline void nib_set1(nibnib* n, signed char v) { n->n1 = v; }
static inline void nib_set2(nibnib* n, signed char v) { n->n2 = v; }

Now this works naturally with the C object model. An array of "double
nibbles" is addressable and its size follows the usual rules, and it's
packed efficiently (optimally if CHAR_BIT == 8). There's no reason
this would be any less efficient than if it was supported natively by
the compiler, because it's going to have to do all the same operations
whether they are done through an API like this or implicitly by the
compiler. Using the API avoids having a weird type that doesn't obey
the rules of the language.


If it's as simple to implement as you think, and the unsuitability for
the C object model isn't a problem, maybe you should implement it
yourself, or pay someone to do it. Demanding that other people do it
for you when they clearly think it won't work is not going to succeed.

Alternatively, make a proposal to add this to ISO C; then if it gets
standardised GCC will support it. We don't just add non-standard
extensions because one pushy person sends incredibly long emails.

