This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
Re: Size changes, round 1
- To: kenner at vlsi1 dot ultra dot nyu dot edu
- Subject: Re: Size changes, round 1
- From: Mark Mitchell <mark at codesourcery dot com>
- Date: Sat, 19 Feb 2000 18:34:34 -0800
- Cc: gcc-patches at gcc dot gnu dot org
- Organization: CodeSourcery, LLC
- References: <10002200205.AA18624@vlsi1.ultra.nyu.edu>
>>>>> "Richard" == Richard Kenner <kenner@vlsi1.ultra.nyu.edu> writes:
Mark> I'd be inclined to use byte-sized types everywhere --
Mark> C, C++, and Java, at least, don't have bit-sized types.
Richard> As I understand it, Chill does, though. Ada does too,
Richard> but this gets totally hidden in the front end.
OK -- that suggests we might need bitsize types. Perhaps there could
be a single global scaling factor applied everywhere. So, front-ends
could choose -- have bitsize types, but give up very large sized
types, or have bytesize types, but give up types like `3 bits'.
Richard> (Bitfields are not a separate type. The type of the
Richard> field is still `int', or whatever; it is the declaration
Richard> for the bitfield that carries the width.)
Richard> Right, but the FIELD_DECL has a size that isn't a
Richard> multiple of bytes. And then you have the issue of the
Richard> bit position, which can't count bytes.
Agreed on both counts. However, this argument doesn't persuade me --
*one* special case seems OK to me. So, if DECL_BIT_FIELD is set, then
the DECL_SIZE is in bits, not bytes. That's OK -- a bit field with
2^33 bits is silly, and not supported by the C or C++ language specs.
I agree that the bit position needs to support bits, and might need to
support a bit position up to, say, 2^35 on a 32-bit machine. (If,
say, you made a structure corresponding to the entire address space,
which people sometimes do.) But, again, we only need this in a
FIELD_DECL -- not everywhere, as in your patch.
Your patch gives us maximum generality -- I could make some
declarations use `7' as the DECL_SIZE_UNIT, and others could use `17'.
That's nice -- but we don't really need that. In practice, there are
two interesting units: bits and bytes. We're using a full pointer to
distinguish between these two cases, and we're requiring everyone
manipulating these sizes to remember to do the scaling. We have to do
that even in a front-end that only uses one size unit because someday
someone might make a back-end that creates a special type with a funny
unit. This is a very high maintenance price to pay.
In my opinion, this is one of the classic mistakes we make engineering
free software in general, and GCC in particular: we add features that
are appealingly general, but expensive to maintain. We then end up
with bugs.
I think that if we're really going to do this, we should, at the very
least, create a structure type:
struct size_type {
  tree scaling_factor;
  tree magnitude;
};
and a bunch of routines for manipulating these. DECL_SIZE, TYPE_SIZE,
etc. should return pointers to these things. The C type-system would
then prevent us from making some of the more obvious mistakes. That
would make me much more comfortable -- it's clean, and it prevents
bugs.
I missed your earlier attempt at discussion. I'm sorry. I would have
argued this point of view pretty strongly. If I'd reviewed your
patch, and you didn't have checkin privileges, I would have required
that you at least encapsulate the sizes in structures as above.
That's just good engineering.
I'm happy to be educated further, but I think I understand the basic
issues. We need to engineer a good solution, not just any solution.
Let's back up, and treat your patch like any other. We would ask:
o What problem are you trying to solve? Please articulate this
clearly.
o Are there alternative solutions to the one you proposed?
o What are the costs and benefits of your solution?
Please work through this with me. Thanks,
--
Mark Mitchell mark@codesourcery.com
CodeSourcery, LLC http://www.codesourcery.com