This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Memory corruption due to word sharing


On Fri, 3 Feb 2012, Richard Guenther wrote:

> On Fri, 3 Feb 2012, Richard Guenther wrote:
> 
> > On Fri, 3 Feb 2012, Richard Guenther wrote:
> > 
> > > On Thu, 2 Feb 2012, Aldy Hernandez wrote:
> > > 
> > > > Linus Torvalds <torvalds@linux-foundation.org> writes:
> > > > 
> > > > > Seriously - is there any real argument *against* just using the base
> > > > > type as a hint for access size?
> > > > 
> > > > If I'm on the hook for attempting to fix this again, I'd also like to
> > > > know if there are any arguments against using the base type.
> > > 
> > > Well, if you consider
> > > 
> > > struct {
> > >   int i : 1;
> > >   char c;
> > > };
> > > 
> > > then you'll realize that 'i' has SImode (and int type) but the
> > > underlying bitfield has only 1 byte size (thus, QImode) and
> > > 'c' starts at offset 1.
> > > 
> > > So no, you cannot use the base type either.
> > > 
> > > I've been playing with the following patch, which computes
> > > an "underlying object" for all bitfields and forcefully lowers
> > > all bitfield component-refs to use that underlying object
> > > (just to check correctness; it doesn't generate nice code, as
> > > BIT_FIELD_REF on memory effectively results in the same
> > > code as using the bitfield FIELD_DECLs directly - we'd
> > > need to explicitly split things into separate stmts with RMW
> > > cycles).
> > > 
> > > You should be able to re-use the underlying-object computation,
> > > though (and we can even make it more intelligent), during expansion
> > > for the C++ memory model (and in fact the underlying-object
> > > computation might just do something different depending on the
> > > memory model in effect).
> > > 
> > > Disclaimer: untested.
> > 
> > The following works (roughly, still mostly untested).  SRA needs
> > a fix (included) and the gimplify.c hunk really only shows what
> > we are supposed to be able to do (access the representative).
> > As-is SRA could now do a nice job on bitfields, but that needs
> > some changes - or we lower all bitfield ops in some extra pass
> > (if not then expand would need to be changed to look at the
> > representatives instead).
> > 
> > Still the idea is to compute all these things up-front during
> > type layout instead of re-discovering them at each bitfield
> > access we expand in get_bit_range.  And we can use that information
> > consistently across passes.
> > 
> > We should of course try harder to avoid adding a new field to
> > struct tree_field_decl - DECL_INITIAL came to my mind, but
> > the C frontend happens to use that for bitfields ... while
> > it probably could as well use lang_type.enum_{min,max}?
> > 
> > Comments?
> 
> Funnily C++ uses tail-padding of base types to pack bitfields
> and thus I run into
> 
>   gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
> 
> Testcase is for example g++.dg/abi/bitfield5.C, bit layout annotated:
> 
> struct A {
>   virtual void f();
>   int f1 : 1;                             <--- bit 64
> };
> 
> struct B : public A {
>   int f2 : 1;  // { dg-warning "ABI" }    <--- bit 65
>   int : 0;
>   int f3 : 4;
>   int f4 : 3;
> };
> 
> maybe it was a bug (the above happens with -fabi-version=1 only),
> but certainly an ABI may specify that we should do such packing.
> 
> What does the C++ memory model say here?  (Incidentally, that's
> one of the cases I was worried about when reviewing your patches;
> I just didn't think of _bitfield_ tail-packing ... ;)).
> 
> I suppose I could just force the bitfield region to start
> at a byte boundary.

The following variant does that.  It gives up on the rewriting at
gimplification time for now (proper lowering should happen later)
and instead uses the new information in get_bit_range (but
unconditionally, as we need it for correctness, for example for
PR48124 and PR52080).

I'm running a bootstrap & regtest on x86_64-unknown-linux-gnu now.

Further work would be to improve representative construction,
eventually making it target dependent (for the PR52080 testcase
we now generate QImode stores on IA64 rather than the SImode
stores we probably should use).  More immediately, we should
search for a better home for DECL_BIT_FIELD_REPRESENTATIVE.

Richard.

2012-02-03  Richard Guenther  <rguenther@suse.de>

	* tree.h (DECL_BIT_FIELD_REPRESENTATIVE): New define.
	(struct tree_field_decl): New field bit_field_representative.
	* stor-layout.c (start_bitfield_representative): New function.
	(finish_bitfield_representative): Likewise.
	(finish_bitfield_layout): Likewise.
	(finish_record_layout): Call finish_bitfield_layout.
	* tree-streamer-in.c (lto_input_ts_field_decl_tree_pointers):
	Stream DECL_BIT_FIELD_REPRESENTATIVE.
	* tree-streamer-out.c (write_ts_field_decl_tree_pointers): Likewise.

	PR middle-end/52080
	PR middle-end/52097
	PR middle-end/48124
	* expr.c (get_bit_range): Unconditionally extract bitrange
	from DECL_BIT_FIELD_REPRESENTATIVE.

	* gimplify.c (gimplify_expr): Translate bitfield accesses
	to BIT_FIELD_REFs of the representative.

	* tree-cfg.c (verify_expr): Do not require mode precision to
	match the field size for aggregate-typed or BLKmode
	BIT_FIELD_REF results.

	* tree-sra.c (create_access_replacement): Only rename the
	replacement if we can rewrite it into SSA form.  Properly
	mark register-typed replacements that we cannot rewrite
	with TREE_ADDRESSABLE.

	* gcc.dg/torture/pr48124-1.c: New testcase.
	* gcc.dg/torture/pr48124-2.c: Likewise.
	* gcc.dg/torture/pr48124-3.c: Likewise.
	* gcc.dg/torture/pr48124-4.c: Likewise.

Index: gcc/stor-layout.c
===================================================================
*** gcc/stor-layout.c.orig	2012-02-06 14:08:03.000000000 +0100
--- gcc/stor-layout.c	2012-02-06 15:54:54.000000000 +0100
*************** finalize_type_size (tree type)
*** 1722,1727 ****
--- 1722,1903 ----
      }
  }
  
+ /* Return a new underlying object for a bitfield started with FIELD.  */
+ 
+ static tree
+ start_bitfield_representative (tree field)
+ {
+   tree repr = make_node (FIELD_DECL);
+   DECL_FIELD_OFFSET (repr) = DECL_FIELD_OFFSET (field);
+   /* Force the representative to begin at a BITS_PER_UNIT aligned
+      boundary - C++ may use tail-padding of a base object to
+      continue packing bits so the bitfield region does not start
+      at bit zero (see g++.dg/abi/bitfield5.C for example).
+      Unallocated bits may happen for other reasons as well,
+      for example Ada which allows explicit bit-granular structure layout.  */
+   DECL_FIELD_BIT_OFFSET (repr)
+     = size_binop (BIT_AND_EXPR,
+ 		  DECL_FIELD_BIT_OFFSET (field),
+ 		  bitsize_int (~(BITS_PER_UNIT - 1)));
+   SET_DECL_OFFSET_ALIGN (repr, DECL_OFFSET_ALIGN (field));
+   DECL_SIZE (repr) = DECL_SIZE (field);
+   DECL_PACKED (repr) = DECL_PACKED (field);
+   DECL_CONTEXT (repr) = DECL_CONTEXT (field);
+   return repr;
+ }
+ 
+ /* Finish up a bitfield group that was started by creating the underlying
+    object REPR with the last field in the bitfield group, FIELD.  */
+ 
+ static void
+ finish_bitfield_representative (tree repr, tree field)
+ {
+   unsigned HOST_WIDE_INT bitsize, maxbitsize, modesize;
+   enum machine_mode mode;
+   tree nextf, size;
+ 
+   size = size_diffop (DECL_FIELD_OFFSET (field),
+ 		      DECL_FIELD_OFFSET (repr));
+   gcc_assert (host_integerp (size, 1));
+   bitsize = (tree_low_cst (size, 1) * BITS_PER_UNIT
+ 	     + tree_low_cst (DECL_FIELD_BIT_OFFSET (field), 1)
+ 	     - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1)
+ 	     + tree_low_cst (DECL_SIZE (field), 1));
+ 
+   /* Now nothing tells us how to pad out bitsize ...  */
+   nextf = DECL_CHAIN (field);
+   while (nextf && TREE_CODE (nextf) != FIELD_DECL)
+     nextf = DECL_CHAIN (nextf);
+   if (nextf)
+     {
+       tree maxsize;
+       /* If there was an error, the field may not be laid out
+          correctly.  Don't bother to do anything.  */
+       if (TREE_TYPE (nextf) == error_mark_node)
+ 	return;
+       maxsize = size_diffop (DECL_FIELD_OFFSET (nextf),
+ 			     DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    + tree_low_cst (DECL_FIELD_BIT_OFFSET (nextf), 1)
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+   else
+     {
+       tree maxsize = size_diffop (TYPE_SIZE_UNIT (DECL_CONTEXT (field)),
+ 				  DECL_FIELD_OFFSET (repr));
+       gcc_assert (host_integerp (maxsize, 1));
+       maxbitsize = (tree_low_cst (maxsize, 1) * BITS_PER_UNIT
+ 		    - tree_low_cst (DECL_FIELD_BIT_OFFSET (repr), 1));
+     }
+ 
+   /* Only if we don't artificially break up the representative in
+      the middle of a large bitfield with different possibly
+      overlapping representatives.  And all representatives start
+      at byte offset.  */
+   gcc_assert (maxbitsize % BITS_PER_UNIT == 0);
+ 
+   /* Round up bitsize to multiples of BITS_PER_UNIT.  */
+   bitsize = (bitsize + BITS_PER_UNIT - 1) & ~(BITS_PER_UNIT - 1);
+ 
+   /* Find the smallest nice mode to use.
+      ???  Possibly use get_best_mode with appropriate arguments instead
+      (which would eventually require splitting representatives here).  */
+   for (modesize = bitsize; modesize <= maxbitsize; modesize += BITS_PER_UNIT)
+     {
+       mode = mode_for_size (modesize, MODE_INT, 1);
+       if (mode != BLKmode)
+ 	break;
+     }
+ 
+   if (mode == BLKmode)
+     {
+       /* We really want a BLKmode representative only as a last resort,
+          considering the member b in
+ 	   struct { int a : 7; int b : 17; int c; } __attribute__((packed));
+ 	 Otherwise we simply want to split the representative up
+ 	 allowing for overlaps within the bitfield region as required for
+ 	   struct { int a : 7; int b : 7; int c : 10; int d; } __attribute__((packed));
+ 	 [0, 15] HImode for a and b, [8, 23] HImode for c.  */
+       DECL_SIZE (repr) = bitsize_int (bitsize);
+       DECL_SIZE_UNIT (repr) = size_int (bitsize / BITS_PER_UNIT);
+       DECL_MODE (repr) = BLKmode;
+       TREE_TYPE (repr) = build_array_type_nelts (unsigned_char_type_node,
+ 						 bitsize / BITS_PER_UNIT);
+     }
+   else
+     {
+       DECL_SIZE (repr) = bitsize_int (modesize);
+       DECL_SIZE_UNIT (repr) = size_int (modesize / BITS_PER_UNIT);
+       DECL_MODE (repr) = mode;
+       TREE_TYPE (repr) = lang_hooks.types.type_for_mode (mode, 1);
+     }
+ }
+ 
+ /* Compute and set FIELD_DECLs for the underlying objects we should
+    use for bitfield access for the structure laid out with RLI.  */
+ 
+ static void
+ finish_bitfield_layout (record_layout_info rli)
+ {
+   tree field, prev;
+   tree repr = NULL_TREE;
+ 
+   /* Unions would be special, for the ease of type-punning optimizations
+      we could use the underlying type as hint for the representative
+      if the bitfield would fit and the representative would not exceed
+      the union in size.  */
+   if (TREE_CODE (rli->t) != RECORD_TYPE)
+     return;
+ 
+   for (prev = NULL_TREE, field = TYPE_FIELDS (rli->t);
+        field; field = DECL_CHAIN (field))
+     {
+       if (TREE_CODE (field) != FIELD_DECL)
+ 	continue;
+ 
+       if (!repr
+ 	  && DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Start new representative.  */
+ 	  repr = start_bitfield_representative (field);
+ 	}
+       else if (repr
+ 	       && ! DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Finish off new representative.  */
+ 	  finish_bitfield_representative (repr, prev);
+ 	  repr = NULL_TREE;
+ 	}
+       else if (DECL_BIT_FIELD_TYPE (field))
+ 	{
+ 	  /* Zero-size bitfields finish off a representative and
+ 	     do not have a representative themselves.  */
+ 	  if (integer_zerop (DECL_SIZE (field)))
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = NULL_TREE;
+ 	    }
+ 	  /* FIXME.  A gap finishes off a representative(?).  */
+ 	  else if (0 /* offset + size of repr != offset of field */)
+ 	    {
+ 	      finish_bitfield_representative (repr, prev);
+ 	      repr = start_bitfield_representative (field);
+ 	    }
+ 	}
+       else
+ 	continue;
+ 
+       if (repr)
+ 	DECL_BIT_FIELD_REPRESENTATIVE (field) = repr;
+ 
+       prev = field;
+     }
+ 
+   if (repr)
+     finish_bitfield_representative (repr, prev);
+ }
+ 
  /* Do all of the work required to layout the type indicated by RLI,
     once the fields have been laid out.  This function will call `free'
     for RLI, unless FREE_P is false.  Passing a value other than false
*************** finish_record_layout (record_layout_info
*** 1742,1747 ****
--- 1918,1926 ----
    /* Perform any last tweaks to the TYPE_SIZE, etc.  */
    finalize_type_size (rli->t);
  
+   /* Compute bitfield representatives.  */
+   finish_bitfield_layout (rli);
+ 
    /* Propagate TYPE_PACKED to variants.  With C++ templates,
       handle_packed_attribute is too early to do this.  */
    for (variant = TYPE_NEXT_VARIANT (rli->t); variant;
Index: gcc/tree.h
===================================================================
*** gcc/tree.h.orig	2012-02-06 14:08:03.000000000 +0100
--- gcc/tree.h	2012-02-06 15:54:54.000000000 +0100
*************** struct GTY(()) tree_decl_with_rtl {
*** 3021,3026 ****
--- 3021,3032 ----
  #define DECL_BIT_FIELD_TYPE(NODE) \
    (FIELD_DECL_CHECK (NODE)->field_decl.bit_field_type)
  
+ /* In a FIELD_DECL, this is a pointer to the storage representative
+    FIELD_DECL.
+    ???  Try harder to find a pointer we can re-use.  */
+ #define DECL_BIT_FIELD_REPRESENTATIVE(NODE) \
+   (FIELD_DECL_CHECK (NODE)->field_decl.bit_field_representative)
+ 
  /* For a FIELD_DECL in a QUAL_UNION_TYPE, records the expression, which
     if nonzero, indicates that the field occupies the type.  */
  #define DECL_QUALIFIER(NODE) (FIELD_DECL_CHECK (NODE)->field_decl.qualifier)
*************** struct GTY(()) tree_field_decl {
*** 3071,3076 ****
--- 3077,3083 ----
  
    tree offset;
    tree bit_field_type;
+   tree bit_field_representative;
    tree qualifier;
    tree bit_offset;
    tree fcontext;
Index: gcc/gimplify.c
===================================================================
*** gcc/gimplify.c.orig	2012-02-06 14:08:03.000000000 +0100
--- gcc/gimplify.c	2012-02-06 15:55:07.000000000 +0100
*************** gimplify_expr (tree *expr_p, gimple_seq
*** 6887,6897 ****
  					fallback != fb_none);
  	  break;
  
  	case ARRAY_REF:
  	case ARRAY_RANGE_REF:
  	case REALPART_EXPR:
  	case IMAGPART_EXPR:
- 	case COMPONENT_REF:
  	case VIEW_CONVERT_EXPR:
  	  ret = gimplify_compound_lval (expr_p, pre_p, post_p,
  					fallback ? fallback : fb_rvalue);
--- 6887,6931 ----
  					fallback != fb_none);
  	  break;
  
+ 	case COMPONENT_REF:
+ #if 0
+ 	  if (DECL_BIT_FIELD_TYPE (TREE_OPERAND (*expr_p, 1)))
+ 	    {
+ 	      tree field = TREE_OPERAND (*expr_p, 1);
+ 	      tree repr = DECL_BIT_FIELD_REPRESENTATIVE (field);
+ 	      gcc_assert (TREE_OPERAND (*expr_p, 2) == NULL_TREE);
+ 	      if (repr != NULL_TREE)
+ 		{
+ 		  tree offset;
+ 		  offset = size_diffop (DECL_FIELD_OFFSET (field),
+ 					DECL_FIELD_OFFSET (repr));
+ 		  offset = size_binop (MULT_EXPR,
+ 				       offset, ssize_int (BITS_PER_UNIT));
+ 		  offset = fold_convert (sbitsizetype, offset);
+ 		  offset = size_binop (PLUS_EXPR, offset,
+ 				       size_diffop (DECL_FIELD_BIT_OFFSET (field),
+ 						    DECL_FIELD_BIT_OFFSET (repr)));
+ 		  *expr_p = build3_loc (input_location,
+ 					BIT_FIELD_REF,
+ 					TREE_TYPE (*expr_p),
+ 					build3_loc (input_location,
+ 						    COMPONENT_REF,
+ 						    TREE_TYPE (repr),
+ 						    TREE_OPERAND (*expr_p, 0),
+ 						    repr,
+ 						    NULL_TREE),
+ 					DECL_SIZE (field), offset);
+ 		  ret = GS_OK;
+ 		  break;
+ 		}
+ 	    }
+ #endif
+ 
+ 	  /* Fall thru.  */
  	case ARRAY_REF:
  	case ARRAY_RANGE_REF:
  	case REALPART_EXPR:
  	case IMAGPART_EXPR:
  	case VIEW_CONVERT_EXPR:
  	  ret = gimplify_compound_lval (expr_p, pre_p, post_p,
  					fallback ? fallback : fb_rvalue);
Index: gcc/tree-sra.c
===================================================================
*** gcc/tree-sra.c.orig	2012-02-06 15:02:29.000000000 +0100
--- gcc/tree-sra.c	2012-02-06 15:54:54.000000000 +0100
*************** create_access_replacement (struct access
*** 1922,1934 ****
  
    repl = create_tmp_var (access->type, "SR");
    add_referenced_var (repl);
!   if (rename)
      mark_sym_for_renaming (repl);
  
!   if (!access->grp_partial_lhs
!       && (TREE_CODE (access->type) == COMPLEX_TYPE
! 	  || TREE_CODE (access->type) == VECTOR_TYPE))
!     DECL_GIMPLE_REG_P (repl) = 1;
  
    DECL_SOURCE_LOCATION (repl) = DECL_SOURCE_LOCATION (access->base);
    DECL_ARTIFICIAL (repl) = 1;
--- 1922,1940 ----
  
    repl = create_tmp_var (access->type, "SR");
    add_referenced_var (repl);
!   if (!access->grp_partial_lhs
!       && rename)
      mark_sym_for_renaming (repl);
  
!   if (TREE_CODE (access->type) == COMPLEX_TYPE
!       || TREE_CODE (access->type) == VECTOR_TYPE)
!     {
!       if (!access->grp_partial_lhs)
! 	DECL_GIMPLE_REG_P (repl) = 1;
!     }
!   else if (access->grp_partial_lhs
! 	   && is_gimple_reg_type (access->type))
!     TREE_ADDRESSABLE (repl) = 1;
  
    DECL_SOURCE_LOCATION (repl) = DECL_SOURCE_LOCATION (access->base);
    DECL_ARTIFICIAL (repl) = 1;
Index: gcc/tree-cfg.c
===================================================================
*** gcc/tree-cfg.c.orig	2012-02-06 14:08:03.000000000 +0100
--- gcc/tree-cfg.c	2012-02-06 15:54:54.000000000 +0100
*************** verify_expr (tree *tp, int *walk_subtree
*** 2863,2879 ****
  		  error ("invalid position or size operand to BIT_FIELD_REF");
  		  return t;
  		}
! 	      else if (INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		       && (TYPE_PRECISION (TREE_TYPE (t))
! 			   != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("integral result type precision does not match "
  			 "field size of BIT_FIELD_REF");
  		  return t;
  		}
! 	      if (!INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		  && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (t)))
! 		      != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("mode precision of non-integral result does not "
  			 "match field size of BIT_FIELD_REF");
--- 2863,2881 ----
  		  error ("invalid position or size operand to BIT_FIELD_REF");
  		  return t;
  		}
! 	      if (INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		  && (TYPE_PRECISION (TREE_TYPE (t))
! 		      != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("integral result type precision does not match "
  			 "field size of BIT_FIELD_REF");
  		  return t;
  		}
! 	      else if (!INTEGRAL_TYPE_P (TREE_TYPE (t))
! 		       && !AGGREGATE_TYPE_P (TREE_TYPE (t))
! 		       && TYPE_MODE (TREE_TYPE (t)) != BLKmode
! 		       && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (t)))
! 			   != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
  		{
  		  error ("mode precision of non-integral result does not "
  			 "match field size of BIT_FIELD_REF");
Index: gcc/expr.c
===================================================================
*** gcc/expr.c.orig	2012-02-02 16:24:26.000000000 +0100
--- gcc/expr.c	2012-02-06 16:08:44.000000000 +0100
*************** optimize_bitfield_assignment_op (unsigne
*** 4468,4482 ****
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp, tree innerdecl,
  	       HOST_WIDE_INT bitpos, HOST_WIDE_INT bitsize)
  {
!   tree field, record_type, fld;
!   bool found_field = false;
!   bool prev_field_is_bitfield;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
    /* If other threads can't see this value, no need to restrict stores.  */
    if (ALLOW_STORE_DATA_RACES
        || ((TREE_CODE (innerdecl) == MEM_REF
--- 4468,4493 ----
  static void
  get_bit_range (unsigned HOST_WIDE_INT *bitstart,
  	       unsigned HOST_WIDE_INT *bitend,
! 	       tree exp, tree innerdecl ATTRIBUTE_UNUSED,
  	       HOST_WIDE_INT bitpos, HOST_WIDE_INT bitsize)
  {
!   tree field, repr, offset, t;
!   enum machine_mode mode;
!   int unsignedp, volatilep;
  
    gcc_assert (TREE_CODE (exp) == COMPONENT_REF);
  
+   field = TREE_OPERAND (exp, 1);
+   repr = DECL_BIT_FIELD_REPRESENTATIVE (field);
+   if (!repr)
+     {
+       /* We better have DECL_BIT_FIELD_REPRESENTATIVEs for all
+          FIELD_DECLs the C++ memory model applies to.  */
+       *bitstart = *bitend = 0;
+       return;
+     }
+ 
+ #if 0
    /* If other threads can't see this value, no need to restrict stores.  */
    if (ALLOW_STORE_DATA_RACES
        || ((TREE_CODE (innerdecl) == MEM_REF
*************** get_bit_range (unsigned HOST_WIDE_INT *b
*** 4490,4551 ****
        *bitstart = *bitend = 0;
        return;
      }
  
!   /* Bit field we're storing into.  */
!   field = TREE_OPERAND (exp, 1);
!   record_type = DECL_FIELD_CONTEXT (field);
! 
!   /* Count the contiguous bitfields for the memory location that
!      contains FIELD.  */
!   *bitstart = 0;
!   prev_field_is_bitfield = true;
!   for (fld = TYPE_FIELDS (record_type); fld; fld = DECL_CHAIN (fld))
!     {
!       tree t, offset;
!       enum machine_mode mode;
!       int unsignedp, volatilep;
! 
!       if (TREE_CODE (fld) != FIELD_DECL)
! 	continue;
! 
!       t = build3 (COMPONENT_REF, TREE_TYPE (exp),
! 		  unshare_expr (TREE_OPERAND (exp, 0)),
! 		  fld, NULL_TREE);
!       get_inner_reference (t, &bitsize, &bitpos, &offset,
! 			   &mode, &unsignedp, &volatilep, true);
! 
!       if (field == fld)
! 	found_field = true;
! 
!       if (DECL_BIT_FIELD_TYPE (fld) && bitsize > 0)
! 	{
! 	  if (prev_field_is_bitfield == false)
! 	    {
! 	      *bitstart = bitpos;
! 	      prev_field_is_bitfield = true;
! 	    }
! 	}
!       else
! 	{
! 	  prev_field_is_bitfield = false;
! 	  if (found_field)
! 	    break;
! 	}
!     }
!   gcc_assert (found_field);
! 
!   if (fld)
!     {
!       /* We found the end of the bit field sequence.  Include the
! 	 padding up to the next field and be done.  */
!       *bitend = bitpos - 1;
!     }
!   else
!     {
!       /* If this is the last element in the structure, include the padding
! 	 at the end of structure.  */
!       *bitend = TREE_INT_CST_LOW (TYPE_SIZE (record_type)) - 1;
!     }
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
--- 4501,4518 ----
        *bitstart = *bitend = 0;
        return;
      }
+ #endif
  
!   /* ???  As naive as before.
!      FIXME: pass OFFSET to this function, not BITPOS and BITSIZE
!      and fix it up (or assert it does not change, also with respect
!      to TREE_OPERAND (exp, 1)).  Avoid using build3 here.  */
!   t = build3 (COMPONENT_REF, TREE_TYPE (exp),
! 	      TREE_OPERAND (exp, 0), repr, NULL_TREE);
!   get_inner_reference (t, &bitsize, &bitpos, &offset,
! 		       &mode, &unsignedp, &volatilep, true);
!   *bitstart = bitpos;
!   *bitend = bitpos + bitsize;
  }
  
  /* Returns true if the MEM_REF REF refers to an object that does not
Index: gcc/testsuite/gcc.dg/torture/pr48124-1.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-1.c	2012-02-06 16:12:40.000000000 +0100
***************
*** 0 ****
--- 1,33 ----
+ /* { dg-do run } */
+ /* { dg-options "-fno-toplevel-reorder" } */
+ 
+ extern void abort (void);
+ 
+ struct S
+ {
+   signed a : 26;
+   signed b : 16;
+   signed c : 10;
+   volatile signed d : 14;
+ };
+ 
+ static struct S e = { 0, 0, 0, 1 };
+ static int f = 1;
+ 
+ void __attribute__((noinline))
+ foo (void)
+ {
+   e.d = 0;
+   f = 2;
+ }
+ 
+ int
+ main ()
+ {
+   if (e.a || e.b || e.c || e.d != 1 || f != 1)
+     abort ();
+   foo ();
+   if (e.a || e.b || e.c || e.d || f != 2)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-2.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-2.c	2012-02-06 16:12:40.000000000 +0100
***************
*** 0 ****
--- 1,27 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ 
+ static volatile struct S0 {
+     short f3[9];
+     unsigned f8 : 15;
+ } s = {1};
+ static unsigned short sh = 0x1234;
+ 
+ struct S0 a, b;
+ int vi = 0;
+ 
+ void func_4()
+ {
+   s.f8 |= 1;
+   sh = 15;
+   if (vi) a = b;
+ }
+ 
+ int main()
+ {
+   func_4();
+   if (sh != 15)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-3.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-3.c	2012-02-06 16:12:40.000000000 +0100
***************
*** 0 ****
--- 1,32 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1
+ {
+   int f0;
+   int:1;
+   int f3;
+   int:1;
+   int:0;
+   int f6:1;
+ };
+ int g_13 = 1;
+ volatile struct S1 g_118 = {
+     1
+ };
+ 
+ void __attribute__((noinline))
+ func_46 ()
+ {
+   for (g_13 = 0; g_13 >= 0; g_13 -= 1)
+     g_118.f6 = 0;
+ }
+ 
+ int
+ main ()
+ {
+   func_46 ();
+   if (g_13 != -1)
+     abort ();
+   return 0;
+ }
Index: gcc/testsuite/gcc.dg/torture/pr48124-4.c
===================================================================
*** /dev/null	1970-01-01 00:00:00.000000000 +0000
--- gcc/testsuite/gcc.dg/torture/pr48124-4.c	2012-02-06 16:12:40.000000000 +0100
***************
*** 0 ****
--- 1,28 ----
+ /* { dg-do run } */
+ 
+ extern void abort (void);
+ struct S1 {
+     unsigned f0, f1;
+     unsigned short f2, f3;
+     unsigned f4 : 16;
+     unsigned f5, f6;
+     volatile unsigned f7 : 28;
+ };
+ static struct S1 g_76;
+ static struct S1 g_245 = {0,0,0,0,0,0,0,1};
+ static signed char g_323 = 0x80;
+ static void func_1(void)
+ {
+   g_245.f7 &= 1;
+   for (g_323 = 0; g_323 <= -1; g_323 -= 2) {
+       g_76 = g_76;
+       g_76.f4 ^= 11;
+   }
+ }
+ int main()
+ {
+   func_1();
+   if (g_323 != 0 || g_245.f7 != 1)
+     abort ();
+   return 0;
+ }
Index: gcc/tree-streamer-in.c
===================================================================
*** gcc/tree-streamer-in.c.orig	2012-01-05 15:57:41.000000000 +0100
--- gcc/tree-streamer-in.c	2012-02-06 16:13:43.000000000 +0100
*************** lto_input_ts_field_decl_tree_pointers (s
*** 642,647 ****
--- 642,648 ----
    DECL_BIT_FIELD_TYPE (expr) = stream_read_tree (ib, data_in);
    DECL_QUALIFIER (expr) = stream_read_tree (ib, data_in);
    DECL_FIELD_BIT_OFFSET (expr) = stream_read_tree (ib, data_in);
+   DECL_BIT_FIELD_REPRESENTATIVE (expr) = stream_read_tree (ib, data_in);
    DECL_FCONTEXT (expr) = stream_read_tree (ib, data_in);
  }
  
Index: gcc/tree-streamer-out.c
===================================================================
*** gcc/tree-streamer-out.c.orig	2012-01-09 09:39:54.000000000 +0100
--- gcc/tree-streamer-out.c	2012-02-06 16:13:53.000000000 +0100
*************** write_ts_field_decl_tree_pointers (struc
*** 554,559 ****
--- 554,560 ----
    stream_write_tree (ob, DECL_BIT_FIELD_TYPE (expr), ref_p);
    stream_write_tree (ob, DECL_QUALIFIER (expr), ref_p);
    stream_write_tree (ob, DECL_FIELD_BIT_OFFSET (expr), ref_p);
+   stream_write_tree (ob, DECL_BIT_FIELD_REPRESENTATIVE (expr), ref_p);
    stream_write_tree (ob, DECL_FCONTEXT (expr), ref_p);
  }
  

