This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Re: [patch] change specific int128 -> generic intN
- From: DJ Delorie <dj@redhat.com>
- To: libstdc++@gcc.gnu.org
- Cc: libstdc++@gcc.gnu.org, gcc-patches@gcc.gnu.org
- Date: Fri, 9 May 2014 14:29:14 -0400
- Subject: Re: [patch] change specific int128 -> generic intN
- Authentication-results: sourceware.org; auth=none
- References: <201404142303.s3EN3ONP009938@greed.delorie.com> <201405082334.s48NYZni001625@greed.delorie.com> <alpine.DEB.2.10.1405090211170.3709@laptop-mg.saclay.inria.fr> <201405090221.s492LjKD005860@greed.delorie.com> <alpine.DEB.2.10.1405090907070.3684@laptop-mg.saclay.inria.fr>
> Well, it wasn't a hard requirement, it is just that the library has
> to use a more complicated way to get the precision (use (unsigned
> TYPE)(-1) to get the unsigned max and compute the precision from
> that, probably).
We could define macros for the precision too; we already expose the max
and min values as macros, so it's "just a matter of" exporting that info
to the C++ headers somehow.
> > Would it be acceptable for the compiler to always define a set of
> > macros for each of the intN types?
> What set of macros do you have in mind?
In general, I meant: they'd be predefined for pretty much every
compile, not just for the C++ headers.