This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch 1/4] change specific int128 -> generic intN
- From: "Joseph S. Myers" <joseph@codesourcery.com>
- To: DJ Delorie <dj@redhat.com>
- Cc: Eric Botcazou <ebotcazou@adacore.com>, <gcc-patches@gcc.gnu.org>
- Date: Fri, 27 Jun 2014 21:38:38 +0000
- Subject: Re: [patch 1/4] change specific int128 -> generic intN
- References: <201404142303.s3EN3ONP009938@greed.delorie.com> <Pine.LNX.4.64.1406212014500.29257@digraph.polyomino.org.uk> <201406242332.s5ONWOZn012836@greed.delorie.com> <2471914.BhTlkUd4df@polaris> <201406272104.s5RL4nKa029229@greed.delorie.com>
On Fri, 27 Jun 2014, DJ Delorie wrote:
> If you still disagree, let's first figure out what the right
> relationship between TYPE_SIZE and TYPE_SIZE_UNIT is, for types that
> aren't a multiple of BITS_PER_UNIT.
My suggestion: TYPE_SIZE should always equal TYPE_SIZE_UNIT times
BITS_PER_UNIT, so it includes any padding bits (and so should not exist,
really - it's an extra pointer bulking up lots of trees with redundant
information), while TYPE_PRECISION is what gives the number of value/sign
bits. If you're allocating bit-fields, TYPE_PRECISION says how many bits
to use; if you're allocating registers (which might not always correspond
neatly to multiples of BITS_PER_UNIT), TYPE_MODE is what's relevant;
otherwise, you're allocating whole bytes in memory and can use
TYPE_SIZE_UNIT.
--
Joseph S. Myers
joseph@codesourcery.com