This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch 1/4] change specific int128 -> generic intN
- From: DJ Delorie <dj at redhat dot com>
- To: Eric Botcazou <ebotcazou at adacore dot com>
- Cc: gcc-patches at gcc dot gnu dot org
- Date: Fri, 11 Jul 2014 12:30:47 -0400
- Subject: Re: [patch 1/4] change specific int128 -> generic intN
- References: <201404142303 dot s3EN3ONP009938 at greed dot delorie dot com> <1878297 dot 97iUzkpC0g at polaris> <201407101634 dot s6AGYm4o019679 at greed dot delorie dot com> <1528221 dot 6ADjPgnoBP at polaris>
> > PSImode is 20 bits, fits in a 20 bit register, and uses 20 bit operations.
> Then why do you need this change?
Because parts of the GCC code use the byte size instead of the bit
size, or round up, or assume power-of-two sizes.
> > -      TYPE_SIZE (type) = bitsize_int (GET_MODE_BITSIZE (TYPE_MODE (type)));
> > +      TYPE_SIZE (type) = bitsize_int (GET_MODE_PRECISION (TYPE_MODE (type)));
> >        TYPE_SIZE_UNIT (type) = size_int (GET_MODE_SIZE (TYPE_MODE (type)));
> >        break;
> What are GET_MODE_BITSIZE and GET_MODE_PRECISION for PSImode?
Both *should* be 20 for msp430. But GET_MODE_BITSIZE returns 32,
because it is a macro defined as GET_MODE_SIZE * BITS_PER_UNIT, so it
can never return 20.
> > If a type is 17-20 bits, PSImode is chosen. If it's 21 bits or
> > larger, SImode is chosen. If it's 16 or fewer bits, HImode is chosen.
> Size or precision? That's the crux of the matter.
GCC typically uses size for "fits in a" tests.