This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Size changes, round 1


    OK -- that suggests we might need bitsize types.  Perhaps there could
    be a single global scaling factor applied everywhere.  So, front-ends
    could choose -- have bitsize types, but give up very large sized
    types, or have bytesize types, but give up types like `3 bits'.

Unfortunately, Ada is a language that needs both.  I don't know about
Chill, but suspect it does as well from what I can see.

    I agree that the bit position needs to support bits, and might need to
    support a bit position up to, say, 2^35 on a 32-bit machine.  (If,
    say, you made a structure corresponding to the entire address space,
    which people sometimes do.)  But, again, we only need this in a
    FIELD_DECL -- not everywhere, as in your patch.

Note that I didn't start this: the addition of TYPE_SIZE_UNIT did.  If we're
going to keep that and do things consistently, we also need the fields I
added.  If we don't have them, then we need to remove TYPE_SIZE_UNIT or
replace it with something else.

As I said, I tried to get a consensus on this quite a while ago by asking
why this was added and whether it was still needed.  The consensus from
that discussion was that it was.  So I went in the direction of finishing that
support.  Had the consensus been in some other direction, I would have gone
that way.

I must admit to being a little annoyed that I raised the issue quite a while
ago, left several weeks for discussion, there wasn't much, and now that I'm
starting to implement the results of that discussion, *now* there's renewed
interest in it.

    In practice, there are two interesting units: bits and bytes.  We're
    using a full pointer to distinguish between these two cases, and we're
    requiring everyone manipulating these sizes to remember to do the
    scaling.  

Not exactly.  Some things need one and some need the other.  We have always
had places in the compiler that used sizes in bytes and others that used
sizes in bits.  The change to add TYPE_SIZE_UNIT (and my extension to it)
simplifies that code by doing the conversion only once.

    We have to do that even in a front-end that only uses one size unit
    because someday someone might make a back-end that creates a special
    type with a funny unit.  This is a very high maintenance price to pay.

I don't follow what you're trying to say here.  Front ends, in general, don't
do much with type and decl sizes.

    I think that if we're really going to do this, we should, at the very
    least, create a structure type:

      struct size_type { 
        tree scaling_factor;
        tree magnitude;
      };

    and a bunch of routines for manipulating these.  DECL_SIZE, TYPE_SIZE,
    etc. should return pointers to these things.  The C type-system would
    then prevent us from making some of the more obvious mistakes.  That
    would make me much more comfortable -- it's clean, and it prevents
    bugs.

I don't see what you gain by this.  You really do want both values for each
type for the reasons that were given when TYPE_SIZE_UNIT was added and
if you have those, DECL_SIZE_UNIT is just a logical extension.  If you
have just one size with a scale factor, you only gain one of the benefits
of the original change.

Note that you need to think of the cases where sizes are variable.  If you
have code that constantly converts between bit and byte sizes, especially if
you make the bit sizes a wider type, you end up generating extraordinarily
bad code.  The case where sizes are constant is *not* the case of interest here.

  o What problem are you trying to solve?  Please articulate this
    clearly.

The change to add TYPE_SIZE_UNIT was missing a number of important
pieces: this is one, and another relates to bit sizes.

  o Are there alternative solutions to the one you proposed?

Back out the TYPE_SIZE_UNIT changes.
  
  o What are the costs and benefits of your solution?

The benefits are all the original ones for TYPE_SIZE_UNIT, which I can't
articulate too well since I didn't write it.

As I said, when I first saw the TYPE_SIZE_UNIT stuff, my inclination was the
same as yours: I didn't think we needed it.  But some of the arguments
presented when I took that position convinced me that it does have some
value.  This change indeed shows that it simplifies part of the compiler.

My analysis is that the benefit of having separate byte and bit sizes is that
we can support objects whose size in bytes is up to the maximum allowed by
SIZE_TYPE, not 1/8 of that.  It also means we don't end up with repeated
divisions and multiplications by 8 that we have to worry about optimizing out
when dealing with variable sizes and hence variable field positions.  The
cost is memory utilization in the compiler: one pointer in types and two
pointers in decls.
