Bit-field patch, resurrected

Joseph S. Myers jsm@polyomino.org.uk
Fri Apr 9 20:06:00 GMT 2004


On Thu, 1 Apr 2004, Richard Henderson wrote:

> On Tue, Mar 16, 2004 at 09:50:14PM +0000, Joseph S. Myers wrote:
> > +  /* For ENUMERAL_TYPEs, must check the mode of the types, not the precision;
> > +     in C++ they have precision set to match their range, but may use a wider
> > +     mode to match an ABI.  If we change modes, we may wind up with bad
> > +     conversions.  For INTEGER_TYPEs, must check the precision as well, so
> > +     as to yield correct results for bit-field types.  */
> > +  mode_only_needed = (TREE_CODE (type) == ENUMERAL_TYPE);
> 
> How does this interact with the enum bitfield extension?

It turned out that enum bit-fields (of narrower width) could be given the
special integer type rather than an enum type without causing problems for
debug info (unlike with the original patch; I don't know why).
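
(For reference, a minimal self-contained C sketch of the rule the quoted
comment describes.  The type descriptor and field names below are
illustrative stand-ins of my own, not GCC's real tree nodes or internal
API.)

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for a front-end type node; not GCC's tree.  */
enum type_code { INTEGER_KIND, ENUM_KIND };

struct type_info
{
  enum type_code code;
  int mode_bits;    /* width of the machine mode chosen for the ABI */
  int precision;    /* value range recorded by the front end */
};

/* Enum types only need matching modes, because C++ records a precision
   narrower than the mode; integer types (including the special narrow
   bit-field types) must also agree in precision.  */
static bool
types_interchangeable (const struct type_info *a, const struct type_info *b)
{
  bool mode_only_needed = (a->code == ENUM_KIND);
  if (a->mode_bits != b->mode_bits)
    return false;
  return mode_only_needed || a->precision == b->precision;
}

int
main (void)
{
  struct type_info enum_t = { ENUM_KIND, 32, 2 };     /* three-value enum */
  struct type_info int_t  = { INTEGER_KIND, 32, 32 }; /* plain int */
  struct type_info bf3_t  = { INTEGER_KIND, 32, 3 };  /* 3-bit bit-field type */

  printf ("%d %d\n",
          types_interchangeable (&enum_t, &int_t),  /* 1: modes match */
          types_interchangeable (&bf3_t, &int_t));  /* 0: precision differs */
  return 0;
}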

> > +#define LANG_HOOKS_REDUCE_BIT_FIELD_OPERATIONS false
> 
> Is this really the correct default?  I guess it's the safe one for now,
> but I would expect that *any* language that sets TYPE_PRECISION small
> would want the semantics controlled by this flag.

C++ enum types get TYPE_PRECISION set artificially small.  I don't know
that this means they want code generated to reduce widths.  I didn't
investigate why Ada didn't like the other default.
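
(As a concrete illustration, again my own example rather than anything from
the patch: a three-value enum only needs two bits of range, which is roughly
the precision the C++ front end records for it, while its storage mode still
follows the ABI's underlying integer type.  The GNU C enum bit-field
extension mentioned above then lets a member be declared narrower than that
mode.)

#include <stdio.h>

/* Illustrative example only; the names are made up.  */
enum colour { RED, GREEN, BLUE };  /* values 0..2 fit in two bits */

struct pixel
{
  enum colour c : 2;  /* enum bit-field extension: narrower than the mode */
};

int
main (void)
{
  struct pixel p = { GREEN };
  printf ("sizeof (enum colour) = %zu, c = %d\n",
          sizeof (enum colour), (int) p.c);
  return 0;
}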

-- 
Joseph S. Myers
jsm@polyomino.org.uk
