enable maximum integer type to be 128 bits

Paolo Bonzini bonzini@gnu.org
Mon Jul 12 08:52:00 GMT 2004


> ILP32,LL64 (with sizeof(intmax_t) == sizeof(long)) is just fine.

Why?  The biggest integer type is almost always long long, and that's 
the definition of intmax_t.

I think that C90's guarantees about long were only justified by the lack 
of stdint.h/inttypes.h; they make no sense now.

We've had ten years to transition from long to size_t/ptrdiff_t whenever 
it was appropriate.  Now we have a standard which is in flux (and its 
brokenness is made manifest by the contradiction inherent in the 
introduction of long long), but we have ten more years to transition to 
intptr_t.

As far as I am concerned, transitioning GNU Smalltalk to use intptr_t 
made the code a lot clearer (GNU Smalltalk uses it heavily because it 
has tagged integers which are as wide as pointers).

Standards evolve.  The C++ committee caused big discussions when 
introducing new rules for "for" statement scoping -- and that was a real 
semantic change that could break existing programs, not simply a 
contradiction between two parts of the standard.

> However, it's not a major
> problem, primarily because no one uses intmax_t

Why not?  Not much used, perhaps, or not yet.  For example, I used it 
for overflow-checked multiplication, as in

#if SIZEOF_INTMAX_T >= 2 * SIZEOF_INTPTR_T
   /* intmax_t is wide enough to hold the exact product.  */
   intmax_t wide = (intmax_t) a * (intmax_t) b;
   if (wide < -((intmax_t) 1 << (8 * SIZEOF_INTPTR_T - 1))
       || wide > ((intmax_t) 1 << (8 * SIZEOF_INTPTR_T - 1)) - 1)
     return OVERFLOW;
   else
     return (intptr_t) wide;
#else
   result = a * b;
   if (b == 0 || result / b == a)
     return result;
   else
     return OVERFLOW;
#endif

I could have used "long long", but if Jan's suggestion of a 128-bit 
intmax_t goes in, this code will automatically pick the faster algorithm.

Paolo




More information about the Gcc-patches mailing list