This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [patch 1/4] change specific int128 -> generic intN


> That's what'll need fixing then.

Can I change TYPE_SIZE to TYPE_SIZE_WITH_PADDING then?  Because it no
longer reflects the type's actual size.  Why do we have to round up a
type's size anyway?  That's a pointless assumption *unless* you're
allocating memory for it, and in that case you want TYPE_SIZE_UNIT
anyway.

> I doubt there are too many places that require changing.

I don't doubt it, because I've been fighting these assumptions for
years.

> > Heck, most of gcc is oblivious to the idea that types might not be
> > powers-of-two in size.  GCC doesn't even bother with a
> > DECL_PRECISION.
> 
> Sure - why would you even need one?

Why do we need to have DECL_SIZE (the size of the decl in bits, rounded
up to a whole number of bytes) and DECL_SIZE_UNIT (the same rounded-up
size, expressed in bytes), yet not have something that says how big the
decl *really is* ?

A pointer on MSP430 is 20 bits.  All the general registers are 20
bits.  Not 16, and not 24.  20.  There's nothing in a decl that says
"I'm 20 bits" and inevitably it ends up being SImode instead of
PSImode.

> > It seems to work just fine in testing, and I'm trying to make it
> > non-fundamental.
> 
> I also think this is not a very good idea.

Then please provide a "very good idea" for how to teach gcc about true
20-bit types in a system with 8-bit memory and 16-bit words.

