GCC seems to treat the 32-bit integer constant -0x80000000 (INT_MIN) as an unsigned value, when it should be signed. (I don't think this is a duplicate of bug 25329, since I'm not trying to negate the constant.) For example:
int a = 1;
return -0x80000000 < a;
improperly returns 0, even though -0x80000000 is less than any positive value. From the assembly output, it seems as though GCC is treating -0x80000000 as an unsigned value (using the unsigned "seta" instruction to interpret the comparison result):
movl a, %eax
cmpl $-2147483648, %eax
Changing the comparison to:
return (int)-0x80000000 < a; // Cast to signed
succeeds, as expected (interestingly using "setne" rather than "setg", though certainly either works).
> GCC seems to treat the 32-bit integer constant -0x80000000 (INT_MIN) as an
> unsigned value, when it should be signed.
Incorrect. -0x80000000 is not an integer constant, it's the negation of the
integer constant 0x80000000, which is unsigned (C99 6.4.4.1).
Fair enough, but GCC's documentation explicitly says (gcc.info section 4.5):
* `Whether signed integer types are represented using sign and
magnitude, two's complement, or one's complement, and whether the
extraordinary value corresponding to the most negative number is a
     trap representation or an ordinary value (C99 6.2.6.2).
GCC supports only two's complement integer types, and all bit
patterns are ordinary values.
Given that, I think a typical user would assume (as I have) that GCC would treat -0x80000000 as the signed value -2^31; otherwise there would seem to be no way to write a single constant to represent that valid value. So I'm going to have to argue that this is still a bug, whether in the documentation or in the compiler itself.
> Given that, I think a typical user would assume (as I have) that GCC would
> treat -0x80000000 as the signed value -2^31; otherwise there would seem to be
> no way to write a single constant to represent that valid value.
You're confusing the internal representation, described by the paragraph you
quoted, and the syntax of literals. There is no bug in this case, it's the
well-known limitation of C whereby you need to write INT_MIN as (-INT_MAX - 1).
Subject: Re: -0x80000000 (INT_MIN) erroneously treated as
On Sun, 1 Jun 2008, gcczilla at achurch dot org wrote:
> Fair enough, but GCC's documentation explicitly says (gcc.info section 4.5):
It also explicitly explains, in the section "Incompatibilities", that
-2147483648 is positive, and why. (In C99 mode, 2147483648 becomes of
type long long, but 0x80000000 is still unsigned.)
Thanks for the clarification. (To be honest, I wouldn't have thought to look in the "Incompatibilities" section--I haven't touched a non-ANSI compiler for over a decade--but either way I guess it's my fault for not searching for "2147483648".)
Touched up the summary line a bit to help future searchers.