Re: [patch 1/4] change specific int128 -> generic intN
- From: DJ Delorie <dj at redhat dot com>
- To: Eric Botcazou <ebotcazou at adacore dot com>
- Cc: gcc-patches at gcc dot gnu dot org
- Date: Thu, 3 Jul 2014 12:12:02 -0400
- Subject: Re: [patch 1/4] change specific int128 -> generic intN
- Authentication-results: sourceware.org; auth=none
- References: <201404142303 dot s3EN3ONP009938 at greed dot delorie dot com> <23409176 dot SSiGLCXs8E at polaris> <201407021457 dot s62EvTOm016332 at greed dot delorie dot com> <1920647 dot vUrbzv2NSg at polaris>
> And the hardware really loads 20 bits and not 24 bits? If so, I
> think you might want to consider changing the unit to 4 bits instead
> of 8 bits. If not, the mode is padded and has 24-bit size, so why is
> setting TYPE_PRECISION to 20 not sufficient to achieve what you want?
The hardware transfers data in and out of byte-oriented memory in
TYPE_SIZE_UNIT-sized chunks. Once in a hardware register, all operations
are either 8, 16, or 20 bits (TYPE_SIZE) in size. So yes, values are
padded in memory, but no, they are not padded in registers.
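To make that concrete, here's a minimal C sketch (host code, not target
code; load20, SIZE_UNITS, and PRECISION are made-up names, and it assumes
a little-endian host): the memory side always moves whole bytes, while the
register side only ever uses 20 bits.

  #include <stdint.h>
  #include <string.h>

  #define PRECISION  20                   /* bits that matter in a register */
  #define SIZE_UNITS  3                   /* bytes moved to/from memory */
  #define MASK ((1u << PRECISION) - 1)    /* 0xFFFFF */

  /* Transfer SIZE_UNITS bytes from a padded little-endian slot,
     then operate on only PRECISION bits of the result.  */
  static uint32_t
  load20 (const unsigned char *p)
  {
    uint32_t v = 0;
    memcpy (&v, p, SIZE_UNITS);  /* memory side: 24 bits move */
    return v & MASK;             /* register side: 20 bits used */
  }

  int
  main (void)
  {
    unsigned char slot[SIZE_UNITS] = { 0xff, 0xff, 0xff };
    return load20 (slot) == 0xfffff ? 0 : 1;
  }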
Setting TYPE_PRECISION is mostly useless, because most of gcc assumes
it's the same as TYPE_SIZE and ignores it. Heck, most of gcc is
oblivious to the idea that types might not be powers-of-two in size.
GCC doesn't even bother with a DECL_PRECISION.
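Concretely, for a 20-bit type under this scheme, the relevant fields
would end up like this (a sketch of the intended values, not current
trunk behavior):

  TYPE_PRECISION (t)  = 20   bits
  TYPE_SIZE (t)       = 20   bits
  TYPE_SIZE_UNIT (t)  =  3   bytes (rounded up for memory)
  BITS_PER_UNIT       =  8

and 20 != 3 * 8, which is exactly the broken relationship discussed below.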
> > Thus, in these cases, TYPE_SIZE and TYPE_SIZE_UNIT no longer have
> > a "* BITS_PER_UNIT" mathematical relationship.
> I'm skeptical this can work, it's pretty fundamental.
It seems to work just fine in testing, and I'm trying to make it