This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Re: [patch] change specific int128 -> generic intN
- From: Marc Glisse <marc dot glisse at inria dot fr>
- To: DJ Delorie <dj at redhat dot com>
- Cc: libstdc++ at gcc dot gnu dot org, gcc-patches at gcc dot gnu dot org
- Date: Fri, 9 May 2014 09:59:29 +0200 (CEST)
- Subject: Re: [patch] change specific int128 -> generic intN
- Authentication-results: sourceware.org; auth=none
- References: <201404142303 dot s3EN3ONP009938 at greed dot delorie dot com> <201405082334 dot s48NYZni001625 at greed dot delorie dot com> <alpine dot DEB dot 2 dot 10 dot 1405090211170 dot 3709 at laptop-mg dot saclay dot inria dot fr> <201405090221 dot s492LjKD005860 at greed dot delorie dot com>
- Reply-to: libstdc++ at gcc dot gnu dot org
On Thu, 8 May 2014, DJ Delorie wrote:

>> Assuming that the formula sizeof(type)*char_bit==precision works for all
>
> It doesn't.  The MSP430 has __int20 for example.
Well, it wasn't a hard requirement; it just means the library has to use
a more complicated way to get the precision (probably by using
(unsigned TYPE)(-1) to get the unsigned max and computing the precision
from that).
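A minimal sketch of that idea (not actual libstdc++ code; the function
name is made up for illustration): derive the precision by counting the
bits of the unsigned max, rather than assuming
sizeof(type) * CHAR_BIT == precision, which fails for types like
MSP430's __int20.

```cpp
#include <climits>
#include <type_traits>

// Hypothetical helper: count the value bits of T's unsigned counterpart
// by shifting its maximum value ((unsigned T)(-1)) down to zero.
template <typename T>
int precision_of()
{
    using U = typename std::make_unsigned<T>::type;
    int bits = 0;
    for (U max = static_cast<U>(-1); max != 0; max >>= 1)
        ++bits;                 // one iteration per value bit in the max
    return bits;                // the signed precision would be bits - 1
}
```

On a target with padding-free types this agrees with sizeof*CHAR_BIT,
but it also gives the right answer (20) for something like __int20.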
> Would it be acceptable for the compiler to always define a set of
> macros for each of the intN types?
What set of macros do you have in mind?
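For illustration only (the thread deliberately leaves the actual set
open, and these names are hypothetical, not anything GCC predefines
here), such a per-type macro set might look like:

```c
/* Hypothetical sketch of compiler-predefined macros for a 20-bit
   __int20 type -- names and values are illustrative assumptions. */
#define __INT20_WIDTH__ 20
#define __INT20_MAX__   524287L                 /* 2^19 - 1 */
#define __INT20_MIN__   (-__INT20_MAX__ - 1L)
```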
>> I would have thought that would be discouraged,
>
> If we can't think of another way...
--
Marc Glisse