This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Two suggestions for gcc C compiler to extend C language (by WD Smith)


On 29/07/16 20:54, Warren D Smith wrote:
> On 7/29/16, Jonathan Wakely <jwakely.gcc@gmail.com> wrote:
>> Let's imagine we have a 4-bit type, called nibble.
>>
>> sizeof(nibble) == 1, because you can't have an object with a smaller size.
>>
>> nibble a[2];
>> sizeof(a) == 1;
>>
>> Because otherwise there isn't much benefit.
> 
> --bitsizeof() is required.

That would not change how "sizeof" works.

> 
>> So now we have a type which violates one of the core rules of the type
>> system. sizeof(nibble[2 * N]) != 2 * sizeof(nibble[N])
> 
> --by the way, it is spelled "nybble."  Just like 8-bit objects
> are not called a "bite." If you are reading "bite magazine" you are probably
> in a different area.  And the reason that commonly-used words
> like nybble and byte were coined, is these are common concepts, in common use.
> Not an ultra-rare concept hardly ever used, like
> some people have utterly ridiculously asserted.

Actually, it is almost always spelt "nibble".  Some people do spell it
"nybble", but "nibble" is the most common.

> 
>> That means any generic code that works with arbitrary types doesn't
>> work with this type.
> 
> --But C does not support generic code per se.
> 
>> This also breaks various idioms:
>>
>> nibble b[3];
>> for(int i=0; i < sizeof(b)/sizeof(b[0]); ++i)
>>     ...
>>
>> This loop doesn't visit all the elements.
> 
> --if you want to use stupid idioms, you will have bugs.
> If you wanted to be smart about it, you use some sort of bitsizeof() which
> should have been what C provided in the first place.
> 

So to be "smart", we should use features that C does not have, but which
you think it should have had?  All these people writing correct, working
code are "stupid", when in fact we should have been "smart" enough to
write code that does not make sense in C and could not compile?

>> Given a pointer to an array of nibbles and a length, how do I iterate
>> through the array?
> 
> for(i=0; i<bitsizeof(a); i++){ s += a[i]; }
> 

Even if it were possible to have the packed nibble array "a", and there
were a "bitsizeof" operator, that loop still would not be correct:
"bitsizeof(a)" would count bits, not elements, so the index would run
well past the last nibble unless it were divided by the element width.

> 
>> void do_something(nibble* p);
>> void sum (nibble* p, size_t n) {
>>     while (n--)
>>         do_something(p + n);
>> }
>>
>> Pointer arithmetic won't work, because you can't address half a byte.
>> What is the address of p[n-1] ? You can't know, without knowing if p
>> points to the first or second nibble in a byte.
> 
> --pointers would either have to be forbidden for subbyte objects, or
> would have to be implemented in a way involving, e.g., a shift if necessary.
> For example, the 7th bit inside a byte -- well 3 extra bits of
> address are needed to know which bit it is, so which byte is address>>3,
> and which bit is address&7.
> 
> By the way, on some machines I think this already happens, not to
> address bits and nybbles, but to address bytes.  Did anybody moan that
> on such machines, "bytes are unaddressable"?  And therefore unusable
> in C?  

Accessing a "byte" is always easy in C, because a "byte" is by
definition the smallest addressable unit - I assume you mean "octet".

Yes, people who use machines with CHAR_BIT == 16 (or anything other than
8) /do/ moan because they can't access octets easily.  I know I
certainly moaned when programming on such a processor.  There is no
octet type, or uint8_t, on such systems.  You can't make arrays of 8-bit
elements, and you can't take their addresses or directly address them.
You have to use bitfields, or shifts and masks.  This applies to all C
compilers on such devices (as far as I know, gcc does not support any
CHAR_BIT == 16 systems).

> If and when anybody ever moaned that, then people like you told
> them they were idiots, and you implemented the right stuff to do it
> inside GCC.  Undoubtedly therefore, the needed code for my suggestion
> already is inside GCC, merely with a few numbers changed.

They moan about it - but they live with it, because that's the way C
works, and that's the processor they are dealing with.  And they write
access functions or bitfields as needed to get the job done.  They don't
demand that their compiler supplier does something magical.

> 
>> C does not naturally work with objects smaller than a byte (which
>> means the smallest addressable memory location, and is at least 8 bits
>> but might be more). They can exist as bit fields inside other objects,
>> but not in isolation.
> 
> --other languages like Pascal managed to address these "unaddressable" objects,
> to use the bogus word one of my critics branded them with.

No, Pascal cannot address items smaller than a byte.  Pascal provides
abstractions in its array operations that let you work with packed
arrays - that is because Pascal's array handling operates at a higher
level than C's, and has more features.

> 
>> To use such types it is much cleaner to define an abstraction layer
>> that does the packing, masking, extracting etc. for you. Either a set
>> of functions that work with a pointer to some buffer, into which 4-bit
>> values are stored, or a C++ class that gives the appearance of a
>> packed array of 4-bit types but is implemented as a buffer of bytes.
>> But I suggested that right at the start of the thread and you
>> dismissed it (incorrectly saying it required C++, which is nonsense).
>>
>> // A double nibble
>> struct nibnib {
>>   signed char n1 : 4;
>>   signed char n2 : 4;
>> };
>> inline signed char nib_get1(const nibnib* n) { return n->n1; }
>> inline signed char nib_get2(const nibnib* n) { return n->n2; }
>> inline void nib_set1(nibnib* n, signed char v) { n->n1 = v; }
>> inline void nib_set2(nibnib* n, signed char v) { n->n2 = v; }
>>
>> Now this works naturally with the C object model. An array of "double
>> nibbles" is addressable and its size follows the usual rules, and it's
>> packed efficiently (optimally if CHAR_BIT == 8). There's no reason
>> this would be any less efficient than if it was supported natively by
>> the compiler, because it's going to have to do all the same operations
>> whether they are done through an API like this or implicitly by the
>> compiler. Using the API avoids having a weird type that doesn't obey
>> the rules of the language.
> 
> --sigh.  It can certainly be done "manually" in ways like you here suggest.
> Then you get ugly code that looks way different from the same code for bytes.
> Then I have to write two programs not one program, basically, the two
> appearing quite different, to do the same stuff for bytes and nybbles.
> That makes it way more likely
> I create a bug, and makes it unappetizing to read and check the code(s).

Learn a little about programming and making abstractions.  It is not hard.

> 
> 
>> Alternatively, make a proposal to add this to ISO C, then if it gets
>> standardised GCC will support it. We don't just add non-standard
>> extensions because of one pushy person sending incredibly long emails.
> 
> The way ISO C seems to happen, is some compilers implement the idea as
> C extensions, then they see the light that it was a good idea and is
> popular; then hence make it a standard with a few names changed.

That has certainly happened, but it is not the only way ISO C has
changed.  But as ISO C is considered a stable language now, it takes a
substantial level of justification to make non-trivial changes.  There
needs to be a strong level of demand for the feature, clear benefits
(and avoiding "ugly" code or making the programmer's job easier is not
good enough), a solid presentation showing how the change would not
cause problems or conflicts with existing code, and it also needs a
reference implementation in a major compiler.

And as ISO C is considered a stable language, C compiler developers (not
just gcc) will not add new language features unless there are clear
benefits, and at least a solid plan for the feature to become part of
the next C standard.

You can expect no serious language changes in future C standards.
Certainly the kind of change you are asking for with support for nibbles
is completely out of the question (and please, stop claiming it is
simple, obvious or trivial - accept the word of people who know what
they are talking about).  You can expect to see changes and additions
to the C standard library, and perhaps a few minor language features -
these will almost certainly come from C++ rather than appearing as
independent C inventions.

And in gcc C, you can expect to see more builtins or attributes that
improve performance or error checking, but you are unlikely to see any
new language features except occasional "backports" from C++.

> 
> The reason my emails are so incredibly long is, I keep on having
> utterly obvious truths disputed by people who ought to know better,
> and have to go back to basics to demonstrate their validity.
> It would be simpler if the utterly obvious truths I state, were just accepted as
> utterly obvious truths.  Then there would have been a short single email.
> 

