This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: More on type sizes
- To: amylaar at cygnus dot co dot uk
- Subject: Re: More on type sizes
- From: kenner at vlsi1 dot ultra dot nyu dot edu (Richard Kenner)
- Date: Wed, 29 Dec 99 18:35:51 EST
- Cc: gcc at gcc dot gnu dot org
When a size is calculated in bytes, we want to use TYPE_SIZE so that we
get the expected overflow effects.
When a size is calculated in bits, we don't want them.
Yes, I understand that, so let me rephrase my question: when do we want the
size of a type in bits?
Note that not only the size, but also the offset of a bitfield has to
be expressed in single bits - unless we want to use a representation
as a sum of a multiple of BITS_PER_UNIT plus a bit count that is
smaller than BITS_PER_UNIT.
Well, we always used to do that, but the problem is that I believe this
calculation is now being done in "mixed mode": some in sizetype and some
in bitsizetype. But the definition of DECL_FIELD_BITPOS is in
bitsizetype, so the calculation, on a 32-bit machine, will be done in 64 bits
if the value is a variable.
So I think we either have to always view it as a PLUS_EXPR of a MULT_EXPR
of a CONVERT_EXPR of a sizetype value and a constant in bitsizetype (which I
think is a mess) or have two fields: DECL_POSITION, which is the position
in bytes (in sizetype), and DECL_FIELD_BITPOS, which is a bitsizetype value
(currently always a constant less than BITS_PER_UNIT) and gets added to
DECL_POSITION after the appropriate multiplication and conversion. I think
the latter is the best approach. What do others think?