This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: enable maximum integer type to be 128 bits
- From: "Joseph S. Myers" <jsm at polyomino dot org dot uk>
- To: Paolo Bonzini <bonzini at gnu dot org>
- Cc: gcc-patches at gcc dot gnu dot org
- Date: Mon, 12 Jul 2004 08:40:30 +0000 (UTC)
- Subject: Re: enable maximum integer type to be 128 bits
- References: <s0f24ef7.026@emea1-mh.id2.novell.com> <40F24733.5060202@gnu.org>
On Mon, 12 Jul 2004, Paolo Bonzini wrote:
> Why? The biggest integer type is almost always long long, and that's
> the definition of intmax_t.
Not in glibc, on 64-bit platforms; it uses long.
/* Largest integral types. */
#if __WORDSIZE == 64
typedef long int intmax_t;
typedef unsigned long int uintmax_t;
#else
__extension__
typedef long long int intmax_t;
__extension__
typedef unsigned long long int uintmax_t;
#endif
Using long in preference to long long for 64-bit types when both are
64-bit is quite natural for C90 compatibility, and once the choice has
been made between types of the same precision, C++ name mangling makes
it part of the C++ ABI and so difficult to change.
> We've had ten years to transition from long to size_t/ptrdiff_t whenever
> it was appropriate. Now we have a standard which is in flux (and its
But the printf formats %zu / %td were only added in C99 and it took a
while more for libraries to support them. So people printing size_t would
traditionally cast to unsigned long and print with %lu. Breaking the
guarantee that doing so would work was a serious mistake in
C99. This was one of the UK objections to C99
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n883.htm> (the second
issue listed was fixed in TC1, which added a rather inadequate
"Recommended Practice" for the first).
--
Joseph S. Myers http://www.srcf.ucam.org/~jsm28/gcc/
jsm@polyomino.org.uk (personal mail)
jsm28@gcc.gnu.org (Bugzilla assignments and CCs)