This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.



Re: Re: Integral conversions in C/C++


On Sun 20/04/08 7:05 PM, Christian Böhme monodhs@gmx.de sent:
> Actually, the notion of an ``unsigned integer'' as it is known in
> the C/C++ world is already an oxymoron.  And it grows from there.
> Don't get me started on the float/double nightmare.

Actually, no, it's not.  Unsigned just means a subset.  You could just as
easily have stated that the primes are a subset of the integers (well, of the
natural numbers, I guess, which are themselves a subset of the integers).

Your lack of understanding notwithstanding, the concepts of signed and unsigned
are well understood by most experienced developers.

> > And the promotion applies to the operator
> 
> Reiterating a statement without proof does not make said statement
> valid.  Show me the section(s) in the standard(s) that limit integral
> conversions to assignment statements as you claim.

It's not my job to quote you chapter and verse.  It's quite easy to demonstrate
that this is how promotions work (hint: my previous example with the four
integer types shows it quite well).

There are those who have problems with data types (you), and those who do not
(me, the rest of the list apparently).  

> > The promotions occur (in time) with respect to the order of precedence.
> > If they occur out of order you end up with nastiness (like the float
> > example I sent earlier).
> 
> At which point in the parsing process of the original example is
> there actually an operator precedence resolution involved ?  Care
> to elaborate a bit on that ?

The statement

a = -(b * 10U);

can be read as:

- 10U
- b
  - multiply (both operands are unsigned, so no promotion is required)
  - negate (still unsigned)
  - convert the unsigned result to a's 64-bit signed type (value-preserving,
    effectively zero extension)
  - store

That's the order of precedence (I'd do a nice ASCII diagram, but you're not worth
the effort).
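
For what it's worth, here is a small compilable sketch of that reading (the
intermediate variables are just mine for illustration, and it assumes a typical
platform where unsigned int is 32 bits):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t b = 0xdeadbeef;      /* value borrowed from your later example  */
    int64_t  a;

    uint32_t product = b * 10U;   /* unsigned * unsigned: wraps modulo 2^32  */
    uint32_t negated = -product;  /* still unsigned: 2^32 - product          */
    a = negated;                  /* only here does the int64_t conversion happen */

    printf("product = %" PRIu32 "\n", product);
    printf("negated = %" PRIu32 "\n", negated);
    printf("a       = %" PRId64 "\n", a);  /* same value as negated, never negative */
    return 0;
}

a ends up holding the (non-negative) value of the unsigned negation, which is
exactly the order of operations laid out above.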

> >> given the original expression, a = 4294967280UL is a colossal
> >> screw-up which I actually expected the compiler to warn me about.
> >
> > Why?
>
> The missing warning, the screw-up or both ?

It's not invalid; it's a design fault.

Again, with CHAR_BIT == 8, what should this produce:

unsigned char b = 255;
int a = 3;
a += b;

By your logic, it should promote b to an int via sign extension.
Hint: how is 255 any different from (unsigned char)-1 ?
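
A quick compile-and-run of that snippet (assuming CHAR_BIT == 8 and int wider
than char, as on every common platform) shows what the standard rules actually
give:

#include <stdio.h>

int main(void)
{
    unsigned char b = 255;
    int a = 3;

    a += b;                  /* b is promoted to int value-preservingly: 255, not -1 */
    printf("a = %d\n", a);   /* prints 258 */
    return 0;
}

Under the sign-extension reading, a would end up as 2 instead of 258.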

> > You negated an unsigned expression.
>
> Nope.  I multiplied a natural number by -1 expecting to produce
> an integer large enough to hold the value of a natural number
> which itself should be large enough to hold the result of a
> multiplication of one natural number variable of limited range
> and a relatively small natural number constant.  Clean a priori
> information that is available to the compiler.  The C/C++
> backwardness, however, maims it into something that probably
> was the norm in the 70ies but is unacceptable in the 2000s.
> At least in my book.  But then again, it's only PeeCees in the
> computing world even on the MPAs nowadays so why bother.

Your problem is that you think the compiler does type promotion out of order
with respect to precedence.  That would break far too much existing logic.

For example, consider

unsigned a = 60000;
unsigned char b = 255, c;
c = a / b;

By your logic, we should convert everything to unsigned char first?  Or should we
promote b to unsigned, then divide, then convert to unsigned char to store it?

The conversion to the lvalue's type occurs only when we reach the assignment
operator in the tree of expressions that makes up the statement.
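
Spelled out as a complete program (nothing assumed beyond unsigned int being
wider than unsigned char):

#include <stdio.h>

int main(void)
{
    unsigned a = 60000;
    unsigned char b = 255, c;

    /* b is promoted (and then converted to unsigned for the division), the
       divide is done at full width, and only the store into c narrows the
       result back to unsigned char. */
    c = a / b;
    printf("c = %u\n", (unsigned)c);   /* 60000 / 255 == 235 */
    return 0;
}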

> > unsigned char b = 255;
> > int a = 4;
> > a += b;
> > Should the result be 3?
> 
> Define the ranges of ``unsigned char'' and ``int'' in your example
> and I can give you a reasonable answer.

Stop trolling; you're just evading my clear-cut counter-example to your problem
statement.

> int64_t a;
> uint32_t b = 0xdeadbeef;
> 
> a = -(((int64_t )b) * 10u);
> 
> then on a 32 bit machine 10u is promoted to int64_t (which is large
> enough to hold the value) and a 64 bit multiplication performed.
> Or isn't it ?  Now, if there was a machine that actually implemented
> a 64 bit multiplication, what would the result in a of the original
> example look like if the code was to be translated according to the
> standard for this particular machine ?  Could this multiplication
> _ever_ be used if the standards were followed blindly ?

Well, given that the type of the expression (int64_t)b is a signed 64-bit
integer, the 10u is converted to int64_t as well [value-preserving], the
multiplication is done in 64 bits, and the product is then negated.  So you
will get a negative result equal to -0x8B2C97556 (i.e. -37359285590).
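
Here is a compilable version of your cast example, in case you want to check it
yourself (it assumes a typical platform where unsigned int is 32 bits):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int64_t  a;
    uint32_t b = 0xdeadbeef;

    /* The cast widens b first, so the multiply, the negation and the store
       all happen in signed 64-bit arithmetic. */
    a = -(((int64_t)b) * 10u);
    printf("a = %" PRId64 "\n", a);   /* prints -37359285590 */
    return 0;
}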

> Wrong on so many levels.  -1 is neither the value after a promotion
> nor conversion.  If you indeed understood my logic, then you would
> have souped up a conversion example which the above is not.

Um, if you stored 255 into a signed char where CHAR_BIT == 8, then you'd end up
with a value of -1 (on a two's complement machine).  If you then added this
signed char to another signed type, you'd be subtracting one.
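
That is, something like this (the conversion of 255 to signed char is
implementation-defined, but on the usual two's complement machines it gives -1):

#include <stdio.h>

int main(void)
{
    signed char sc = 255;    /* implementation-defined; typically wraps to -1 */
    int a = 4;

    a += sc;                 /* sc promotes to int as -1, so a becomes 3 */
    printf("sc = %d, a = %d\n", sc, a);
    return 0;
}

Which is exactly where the "should the result be 3?" question above came from.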

> I was probably being too optimistic and should have been more
> specific: If you stick to C you never will.  C++ is sloooowly
> beginning to move into the right direction.  Too many Cisms and
> inconsistencies in it that make coding a tedious task but still
> preferable over C and Fortran.  We'll see how far they will have
> come within five years from now.

I've been a professional developer for about 8 years now.  I've been a hobbyist
for several years more.  Of all the problems I run into, figuring out the C
language is not one of them.  Not that I don't have bugs in my code from time to
time, but they're not due to any misunderstanding of the language.  If anything,
I wrestle more with inconsistent APIs (re: the Linux kernel changing with each
point release...) than anything else.

Your problem seems to be that there is the way you want things to be and the way
things are, and you just can't get there from here.  As another poster pointed
out, avoid C.

> Right.  Reals, integers, rational and natural numbers.  It's all the
> same thing, really.  And because it's all the same thing, really, there
> are actually _separate_ clauses for floating point and integer
> conversion cases in the C++ standard.  I am with you so far.

The same rules of type promotion and conversion apply w.r.t. order at least.  

char = float / int;

Will result in the int being converted to float, the floating-point division
occurring, and the result being stored as an integral type in the char.

That is no different than

char = long / short;

or 

int = -(unsigned);
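
Concretely, with made-up values (the variable names here are only for
illustration):

#include <stdio.h>

int main(void)
{
    float f = 7.5f;
    int   i = 2;
    char  c = f / i;   /* i converts to float, 7.5f / 2.0f == 3.75f,
                          truncated to 3 on the store */

    long  l = 300L;
    short s = 3;
    char  d = l / s;   /* s is promoted, the divide is done as long,
                          and 100 is narrowed on the store */

    unsigned u = 5u;
    int n = -u;        /* the negation wraps as unsigned; converting that back
                          to int is implementation-defined, commonly -5 on
                          two's complement machines */

    printf("c = %d, d = %d, n = %d\n", c, d, n);
    return 0;
}

Same order every time: promote/convert for the operator, evaluate, and only
then convert for the store.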

> Define ``similar''.  And, please, prove it with pointers to the
> standards.

Do.  Your.  Own.  Homework.

> Since I have only access to various GCC incarnations nowadays,
> it's impossible for me to (dis-)prove that point.  Can you ?

In my spare time I have written libraries which were used on everything from
UNIX boxes like HP-UX, IRIX, etc., to every flavor of Linux and BSD, to MS-DOS,
to Windows, and to all sorts of proprietary embedded systems.  Nowhere, not even
remotely, has anything like this crept up in my code.  Not to say there haven't
been porting issues [where I skirt non-portability and whatnot], but basic C
syntax has not been one of my faults in a really long time.

> > And just because you're resilient to new information
> 
> Once again: There's been no news so far.

I've shown you several counter-examples where your logic breaks down.
Accept it.

> There is no mess as far as my code is concerned as I adhere to the
> standards as backward/retarded/perverted the rules may be.  Any well
> trained mathematician would, however, think otherwise.  That is the
> view I was expressing and the kind of out-of-the-box thinking that
> appears to baffle you.  Can't help you with that.

Well, I'd hardly say C compares to something like PARI, MAPLE, SAGE or whatever.
C is a low-level programming language meant to do bit twiddling and byte
shuffling.  If you want to script mathematical algorithms, proofs, or whatever,
stick to one of those languages.

As far as C is concerned, it has a well documented and understood [by most] way
of interpreting expressions that yields logical and consistent results.

You're just upset because your code has bugs, and you feel the need to vent.  And
instead of being the big man and admitting you're not correct, you'll just drag
this on because SOMEONE IS WRONG ON THE INTERNET AND I MUST PROVE THEM SO!!!

Get a grip.

Again, there are those who can write software without type errors (me and the
rest of the gang), and those who can't (you).

Tom

