This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Bit-field patch, resurrected


On Thu, 1 Apr 2004, Richard Henderson wrote:

> On Tue, Mar 16, 2004 at 09:50:14PM +0000, Joseph S. Myers wrote:
> > +  /* For ENUMERAL_TYPEs, must check the mode of the types, not the precision;
> > +     in C++ they have precision set to match their range, but may use a wider
> >       mode to match an ABI.  If we change modes, we may wind up with bad
> > +     conversions.  For INTEGER_TYPEs, must check the precision as well, so
> > +     as to yield correct results for bit-field types.  */
> > +  mode_only_needed = (TREE_CODE (type) == ENUMERAL_TYPE);
> 
> How does this interact with the enum bitfield extension?

It turned out that enum bit-fields (of narrower width) could be given the
special integer type rather than an enum type without causing problems for
debug info (unlike with the original patch, where doing so did cause debug
info problems; I don't know why).

> > +#define LANG_HOOKS_REDUCE_BIT_FIELD_OPERATIONS false
> 
> Is this really the correct default?  I guess it's the safe one for now,
> but I would expect that *any* language that sets TYPE_PRECISION small
> would want the semantics controlled by this flag.

C++ enum types get TYPE_PRECISION set artificially small, but I don't know
that this means they want code generated to reduce results to that
precision.  I didn't investigate why Ada didn't like the other default.
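A front end that does want those semantics would override the default in
the usual lang hooks manner.  A minimal sketch (assuming a boolean hook
with the false default defined as in this patch), in the front end's
*-lang.c:

  /* Sketch only: opt in to reducing the results of bit-field
     operations to the declared precision, overriding the false
     default from langhooks-def.h.  */
  #undef LANG_HOOKS_REDUCE_BIT_FIELD_OPERATIONS
  #define LANG_HOOKS_REDUCE_BIT_FIELD_OPERATIONS true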

-- 
Joseph S. Myers
jsm@polyomino.org.uk

