__intN patch 3/5: main __int128 -> __intN conversion.

Joseph S. Myers joseph@codesourcery.com
Thu Aug 21 21:38:00 GMT 2014


On Thu, 21 Aug 2014, DJ Delorie wrote:

> > > +      /* This must happen after the backend has a chance to process
> > > +	 command line options, but before the parsers are
> > > +	 initialized.  */
> > > +      for (i = 0; i < NUM_INT_N_ENTS; i ++)
> > > +	if (targetm.scalar_mode_supported_p (int_n_data[i].m)
> > > +	    && ! standard_type_bitsize (int_n_data[i].bitsize)
> > > +	    && int_n_data[i].bitsize <= HOST_BITS_PER_WIDE_INT * 2)
> > > +	  int_n_enabled_p[i] = true;
> > 
> > This HOST_BITS_PER_WIDE_INT * 2 check seems wrong.
> 
> That check was there before, for __int128; I left it as-is.  There is no
> __int128 (currently) if it's bigger than HBPWI*2.

I don't see any corresponding HOST_BITS_PER_WIDE_INT test for __int128 
being removed (and anyway HOST_BITS_PER_WIDE_INT is now always 64, so such 
a test for __int128 would be dead code).

> > All this block of code appears to be new rather than replacing any 
> > existing code doing something similar with __int128.  As such, I think 
> > it's best considered separately from the main __intN support.
> 
> For each __int<N> we need to provide an __INT<N>_MIN__ and
> __INT<N>_MAX__, just like for "char" we provide __CHAR_MIN__ and
> __CHAR_MAX__.

No, those are provided for use by <limits.h>, which only covers standard C 
types (and in particular does not cover __int128).

> > Some of this may be needed for libstdc++, but not all.  As far as I can 
> > tell, the existing __glibcxx_min, __glibcxx_max, __glibcxx_digits, 
> > __glibcxx_digits10, __glibcxx_max_digits10 macros can be used in most 
> > cases and avoid any need to predefine macros for the min or max of __intN; 
> > you only need to communicate which types exist and their sizes in bits 
> > (that is, a single macro predefine for each N, with anything else being 
> > considered separately if otherwise thought desirable).
> 
> I tried that, and wasn't able to get a simpler macro to do it inline
> than the full macro that lets gcc figure out the values.  Consider the
> two N of 20 and 128; one is not a multiple of bytes and the other will
> likely stress any runtime math.

If __intN is supported, GCC needs to be able to handle folding arithmetic 
on it, such as the expansion of the existing __glibcxx_max macro.

Maybe you need to refactor __glibcxx_digits so there is a version taking 
the bitsize as an argument rather than using sizeof(T) * __CHAR_BIT__, but 
that should be the only change needed to handle such types with the 
existing macros.  The bitsize macros should be the only ones needing 
predefining to pass information to libstdc++.

-- 
Joseph S. Myers
joseph@codesourcery.com


