This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch 1/4] change specific int128 -> generic intN
- From: Bernd Schmidt <bernds at codesourcery dot com>
- To: DJ Delorie <dj at redhat dot com>, Eric Botcazou <ebotcazou at adacore dot com>
- Cc: <gcc-patches at gcc dot gnu dot org>
- Date: Thu, 3 Jul 2014 18:29:33 +0200
- Subject: Re: [patch 1/4] change specific int128 -> generic intN
- Authentication-results: sourceware.org; auth=none
- References: <201404142303 dot s3EN3ONP009938 at greed dot delorie dot com> <23409176 dot SSiGLCXs8E at polaris> <201407021457 dot s62EvTOm016332 at greed dot delorie dot com> <1920647 dot vUrbzv2NSg at polaris> <201407031612 dot s63GC2CM030078 at greed dot delorie dot com>
On 07/03/2014 06:12 PM, DJ Delorie wrote:
> The hardware transfers data in and out of byte-oriented memory in
> TYPE_SIZE_UNITS chunks. Once in a hardware register, all operations
> are either 8, 16, or 20 bits (TYPE_SIZE) in size. So yes, values are
> padded in memory, but no, they are not padded in registers.
>
> Setting TYPE_PRECISION is mostly useless, because most of gcc assumes
> it's the same as TYPE_SIZE and ignores it.
That's what'll need fixing then. I doubt there are too many places that
require changing.
Also, the above seems inaccurate:
$ grep TYPE_PREC *.c|wc -l
633
$ grep TYPE_SIZE *.c|wc -l
551
> Heck, most of gcc is
> oblivious to the idea that types might not be powers-of-two in size.
>
> GCC doesn't even bother with a DECL_PRECISION.
Sure - why would you even need one?
>>> Thus, in these cases, TYPE_SIZE and TYPE_SIZE_UNIT no longer have
>>> a "* BITS_PER_UNIT" mathematical relationship.
>>
>> I'm skeptical this can work, it's pretty fundamental.
>
> It seems to work just fine in testing, and I'm trying to make it
> non-fundamental.
I also think this is not a very good idea.
Bernd