This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: Size changes, round 1
- To: mark at codesourcery dot com
- Subject: Re: Size changes, round 1
- From: kenner at vlsi1 dot ultra dot nyu dot edu (Richard Kenner)
- Date: Sun, 20 Feb 00 18:49:27 EST
- Cc: gcc-patches at gcc dot gnu dot org
> I think you're missing my point.  TYPE_SIZE is size-in-bits, and
> TYPE_SIZE_UNIT is size-in-bytes.
Right.
> Each has a fixed precision, and it's the same, right?
No! That's the *whole point*, and why, I think, you are having trouble
understanding the issues I'm raising.
On 32-bit machines, the precision of TYPE_SIZE is 64 and that of
TYPE_SIZE_UNIT is 32.
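
For concreteness, here is a minimal sketch (illustrative C, not GCC
source; the constant and variable names are invented) of why the bit
count needs the wider precision:

    /* On a 32-bit host, a byte count that fits comfortably in 32 bits,
       multiplied by BITS_PER_UNIT, may no longer fit in 32 bits --
       which is why the size-in-bits value gets the wider type.  */
    #include <stdint.h>
    #include <stdio.h>

    #define BITS_PER_UNIT 8

    int
    main (void)
    {
      uint32_t size_unit = 0x20000000u;                      /* bytes */
      uint64_t size = (uint64_t) size_unit * BITS_PER_UNIT;  /* bits  */

      /* Prints 536870912 bytes = 4294967296 bits; the bit count
         already exceeds what 32 bits can represent.  */
      printf ("%u bytes = %llu bits\n",
              size_unit, (unsigned long long) size);
      return 0;
    }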
> So, if something takes a number of bits that is not divisible by 8,
> but is bigger than the precision of TYPE_SIZE, then we're out of luck --
But that can't happen, since TYPE_SIZE is defined to have a precision
that is at least log2 (BITS_PER_UNIT) bits greater than that of
TYPE_SIZE_UNIT! See set_sizetype in stor-layout.c.
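
Stated as an invariant (a sketch assumed from the description above,
not code taken from stor-layout.c):

    /* If the bit-size precision exceeds the byte-size precision by at
       least log2 (BITS_PER_UNIT), then bytes * BITS_PER_UNIT can never
       overflow the bit-size representation.  */
    #include <assert.h>

    #define LOG2_BITS_PER_UNIT 3   /* log2 (8) */

    int
    main (void)
    {
      int bitsize_prec = 64;   /* TYPE_SIZE precision on a 32-bit machine */
      int bytesize_prec = 32;  /* TYPE_SIZE_UNIT precision                */

      assert (bitsize_prec >= bytesize_prec + LOG2_BITS_PER_UNIT);
      return 0;
    }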
> I'll not say anything further about the actual representation we're
> using -- I don't think it's relevant to the main point I'm trying to
> make, which is about engineering robust software.
I'm usually the first person to argue for robustness, but I simply
disagree that merging these two different fields will increase
robustness; indeed, I feel quite the contrary: it will make things much
harder to maintain, because every new way of updating the sizes will
require a new function to be added and then used in just one place.
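
To illustrate the maintenance pattern being objected to (hypothetical
code, invented for this example; nothing here is proposed in the
thread): with the two fields merged behind one abstraction, each
distinct update pattern would need its own setter, each called from a
single place:

    /* Hypothetical merged-size abstraction, for illustration only.  */
    struct type_size
    {
      unsigned long long bits;   /* stands in for TYPE_SIZE       */
      unsigned long bytes;       /* stands in for TYPE_SIZE_UNIT  */
    };

    /* One entry point per way of updating the size ...  */
    static void
    set_size_from_bits (struct type_size *s, unsigned long long bits)
    {
      s->bits = bits;
      s->bytes = bits / 8;
    }

    static void
    set_size_rounded_to_align (struct type_size *s, unsigned align)
    {
      s->bits = (s->bits + align - 1) / align * align;
      s->bytes = s->bits / 8;
    }
    /* ... and so on: a new function for every new update site.  */

    int
    main (void)
    {
      struct type_size s;
      set_size_from_bits (&s, 64);          /* 64 bits -> 8 bytes      */
      set_size_rounded_to_align (&s, 128);  /* round up to 128 bits    */
      return 0;
    }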