[Bug c/97884] INT_MIN falsely expanded to 64 bit

s.bauroth@tu-berlin.de gcc-bugzilla@gcc.gnu.org
Wed Nov 18 16:51:07 GMT 2020


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97884

--- Comment #7 from s.bauroth@tu-berlin.de ---
I do understand that +2147483648 is not an int, and I am aware of how two's
complement works. It seems to me the reason INT_MIN is defined as
'(-2147483647 - 1)' instead of the mathematically equivalent '-2147483648' is
that the parser tokenizes the absolute value of the literal separately from
its sign. I can also imagine why that eases parsing. But if absolute value and
sign are split, why not treat the absolute value as unsigned? Or maybe do a
check 'in the end' (I have no knowledge of the codebase here...) whether the
type of the literal can be narrowed again?
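To illustrate the difference (a minimal sketch; the exact sizes assume a
typical target where int is 32 bits and long or long long is 64 bits):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* 2147483648 does not fit in int, so it gets a wider type; the
       unary minus is then applied to that wider value. */
    printf("sizeof (-2147483648)     = %zu\n", sizeof (-2147483648));
    printf("sizeof (-2147483647 - 1) = %zu\n", sizeof (-2147483647 - 1));
    printf("sizeof INT_MIN           = %zu\n", sizeof INT_MIN);
    return 0;
}

On such a target this prints 8 for the first line and 4 for the other two,
even though all three expressions evaluate to the same value.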
The fact is that INT_MIN and '-2147483648' both denote values perfectly
representable in 32 bits. I understand why gcc treats the second one
differently (and clang does too); I just think it's neither right nor what one
would expect. And if it is right, gcc should perhaps warn whenever a literal
that fits in 32 bits is expanded to a larger type, not only when it appears in
a format string.

> The type of an integer constant is the first of the corresponding list in which its value can be represented.
Sentences like this one make me think gcc's behaviour is wrong. The number can
be represented in 32 bits.
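For reference, the type the quoted rule selects can be inspected directly with
C11's _Generic (a sketch; the output shown assumes an LP64 target, where the
unsuffixed constant 2147483648 first fits in long):

#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    long: "long", \
    long long: "long long", \
    default: "other")

int main(void)
{
    /* The constant is typed before the unary minus is applied:
       int -> long -> long long, first type in which it fits. */
    printf("2147483648        : %s\n", TYPE_NAME(2147483648));
    printf("-2147483648       : %s\n", TYPE_NAME(-2147483648));
    printf("(-2147483647 - 1) : %s\n", TYPE_NAME(-2147483647 - 1));
    return 0;
}

On LP64 this prints long, long, int: the rule is applied to the unsigned
digit sequence 2147483648, not to the negative value it ends up denoting.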

