enable maximum integer type to be 128 bits

Joseph S. Myers jsm@polyomino.org.uk
Mon Jul 12 12:06:00 GMT 2004


On Mon, 12 Jul 2004, Paolo Bonzini wrote:

> Why?  The biggest integer type is almost always long long, and that's
> the definition of intmax_t.

Not in glibc, on 64-bit platforms; it uses long.

/* Largest integral types.  */
#if __WORDSIZE == 64
typedef long int                intmax_t;
typedef unsigned long int       uintmax_t;
#else
__extension__
typedef long long int           intmax_t;
__extension__
typedef unsigned long long int  uintmax_t;
#endif

Using long in preference to long long for 64-bit types, when both are
64-bit, is quite natural for C90 compatibility; and once that choice
between types of the same precision has been made, C++ name mangling
(which distinguishes long from long long even when they have the same
representation) makes it part of the C++ ABI and so difficult to change.
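
To make the mangling point concrete, here is a minimal C++ sketch (the
function name is made up for illustration; the mangled names assume the
Itanium C++ ABI used by GCC, on an LP64 target):

#include <stdint.h>

/* Hypothetical exported function taking intmax_t by value.  Under the
   Itanium C++ ABI this mangles as _Z7consumel if intmax_t is long, but
   as _Z7consumex if intmax_t is long long, even though both types are
   64 bits wide on LP64.  Flipping the typedef would therefore change
   the symbol name of every such interface.  */
void consume(intmax_t value)
{
    (void) value;  /* empty body; only the mangled name matters here */
}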

> We've had ten years to transition from long to size_t/ptrdiff_t whenever 
> it was appropriate.  Now we have a standard which is in flux (and its 

But the printf formats %zu / %td were only added in C99, and it took a
while longer for C libraries to support them.  So people printing a
size_t would traditionally cast it to unsigned long and print it with
%lu.  Breaking the guarantee that doing so works was a serious mistake
in C99.  This was one of the UK objections to C99
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n883.htm> (the second
issue listed was fixed in TC1, which added a rather inadequate
"Recommended Practice" for the first).

-- 
Joseph S. Myers               http://www.srcf.ucam.org/~jsm28/gcc/
    jsm@polyomino.org.uk (personal mail)
    jsm28@gcc.gnu.org (Bugzilla assignments and CCs)


