This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Fix PRs 52080, 52097 and 48124, rewrite bitfield expansion, enable the C++ memory model wrt bitfields everywhere


> This patch also completely replaces get_bit_range (which is where
> PR52097 ICEs) by a trivial implementation.

How exactly does it short-circuit the decision made by get_best_mode?  By 
making get_bit_range return non-zero in more cases?

> There is PR52134 which will make this patch cause 1 gnat regression.

This looks rather benign to me.

> 	* gimplify.c (gimplify_expr): Translate bitfield accesses
> 	to BIT_FIELD_REFs of the representative.

This part isn't in the patch.

> + /* Return a new underlying object for a bitfield started with FIELD.  */
> +
> + static tree
> + start_bitfield_representative (tree field)
> + {
> +   tree repr = make_node (FIELD_DECL);
> +   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
> +   /* Force the representative to begin at an BITS_PER_UNIT aligned

...at a BITS_PER_UNIT aligned...

> +      boundary - C++ may use tail-padding of a base object to
> +      continue packing bits so the bitfield region does not start
> +      at bit zero (see g++.dg/abi/bitfield5.C for example).
> +      Unallocated bits may happen for other reasons as well,
> +      for example Ada which allows explicit bit-granular structure
> +      layout.  */
> +   DECL_FIELD_BIT_OFFSET (repr)
> +     = size_binop (BIT_AND_EXPR,
> + 		  DECL_FIELD_BIT_OFFSET (field),
> + 		  bitsize_int (~(BITS_PER_UNIT - 1)));
> +   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
> +   DECL_SIZE (repr) = DECL_SIZE (field);
> +   DECL_PACKED (repr) = DECL_PACKED (field);
> +   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
> +   return repr;

Any particular reason to set DECL_SIZE but not DECL_SIZE_UNIT here?  They are 
generally set together.

> +
> + /* Finish up a bitfield group that was started by creating the underlying
> +    object REPR with the last fied in the bitfield group FIELD.  */

...the last field...

> +   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
> +   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
> +
> +   /* Find the smallest nice mode to use.
> +      ???  Possibly use get_best_mode with appropriate arguments instead
> +      (which would eventually require splitting representatives here).  */
> +   for (modesize = bitsize; modesize <= maxbitsize;
> +        modesize += BITS_PER_UNIT)
> +     {
> +       mode = mode_for_size (modesize, MODE_INT, 1);
> +       if (mode != BLKmode)
> + 	break;
> +     }

The double loop looks superfluous.  Why not reuse the implementation of 
smallest_mode_for_size?

  for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
       mode = GET_MODE_WIDER_MODE (mode))
    if (GET_MODE_PRECISION (mode) >= bitsize)
      break;

  if (mode != VOIDmode && GET_MODE_PRECISION (mode) > maxbitsize)
    mode = VOIDmode;

  if (mode == VOIDmode)
    ...

-- 
Eric Botcazou

