This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Masked bitfield comparisons may yield incorrect results




  In message <199903041003.FAA25336@trinity.ihack.net> you write:
  > A patch to fix this problem is attached.  This patch also generates
  > more efficient code in some of the other cases, by eliminating shifts.
FYI -- always include a ChangeLog entry with patches.  Always send patches to
egcs-patches.
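For reference, an entry for this change might look something like the following
(the name, date, and exact wording are placeholders, and this assumes the change
is in fold_truthop):

Thu Mar  4 10:00:00 1999  Your Name  <your@email.example>

	* fold-const.c (fold_truthop): When combining two bit-field
	comparisons, only use the masked word comparison if the bits
	being compared are at the same position in both operands.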


  > Index: fold-const.c
  > ===================================================================
  > RCS file: /cvsroot/src/gnu/dist/gcc/fold-const.c,v
  > retrieving revision 1.2
  > diff -c -2 -r1.2 fold-const.c
  > *** fold-const.c	1999/03/04 05:38:06	1.2
  > --- fold-const.c	1999/03/04 09:44:26
  > ***************
  > *** 3645,3664 ****
  >   
  >         /* Make a mask that corresponds to both fields being compared.
  > ! 	 Do this for both items being compared.  If the masks agree,
  > ! 	 we can do this by masking both and comparing the masked
  > ! 	 results.  */
  >         ll_mask = const_binop (BIT_IOR_EXPR, ll_mask, rl_mask, 0);
  >         lr_mask = const_binop (BIT_IOR_EXPR, lr_mask, rr_mask, 0);
  > !       if (operand_equal_p (ll_mask, lr_mask, 0) && lnbitsize == rnbitsize)
  >   	{
  >   	  lhs = make_bit_field_ref (ll_inner, type, lnbitsize, lnbitpos,
  >   				    ll_unsignedp || rl_unsignedp);
  >   	  rhs = make_bit_field_ref (lr_inner, type, rnbitsize, rnbitpos,
  >   				    lr_unsignedp || rr_unsignedp);
  > ! 	  if (! all_ones_mask_p (ll_mask, lnbitsize))
  > ! 	    {
  > ! 	      lhs = build (BIT_AND_EXPR, type, lhs, ll_mask);
  > ! 	      rhs = build (BIT_AND_EXPR, type, rhs, ll_mask);
  > ! 	    }
  >   	  return build (wanted_code, truth_type, lhs, rhs);
  >   	}
  > --- 3645,3667 ----
  >   
  >         /* Make a mask that corresponds to both fields being compared.
  > ! 	 Do this for both items being compared.  */
  >         ll_mask = const_binop (BIT_IOR_EXPR, ll_mask, rl_mask, 0);
  >         lr_mask = const_binop (BIT_IOR_EXPR, lr_mask, rr_mask, 0);
  > ! 
  > !       /* If the operand size is the same, and the bits being compared are
  > ! 	 in the same position in both operands, we can use the mask to do
  > ! 	 the bitfield extraction, rather than shifting.  */
  > !       if (lnbitsize == rnbitsize && xll_bitpos == xlr_bitpos)
  >   	{
  >   	  lhs = make_bit_field_ref (ll_inner, type, lnbitsize, lnbitpos,
  >   				    ll_unsignedp || rl_unsignedp);
  > + 	  if (! all_ones_mask_p (ll_mask, lnbitsize))
  > + 	    lhs = build (BIT_AND_EXPR, type, lhs, ll_mask);
  > + 
  >   	  rhs = make_bit_field_ref (lr_inner, type, rnbitsize, rnbitpos,
  >   				    lr_unsignedp || rr_unsignedp);
  > ! 	  if (! all_ones_mask_p (lr_mask, rnbitsize))
  > ! 	    rhs = build (BIT_AND_EXPR, type, rhs, lr_mask);
  > ! 
  >   	  return build (wanted_code, truth_type, lhs, rhs);

Let me make sure I understand this change completely.

The only bugfix in this patch is checking that xll_bitpos == xlr_bitpos before
applying the optimization.

It accidentally fixes other problems unrelated to xll_bitpos == xlr_bitpos by
causing more cases to be handled by the code above rather than by the buggy
code shown below:

      /* There is still another way we can do something:  If both pairs of
         fields being compared are adjacent, we may be able to make a wider
         field containing them both.  */
      if ((ll_bitsize + ll_bitpos == rl_bitpos
           && lr_bitsize + lr_bitpos == rr_bitpos)
          || (ll_bitpos == rl_bitpos + rl_bitsize
              && lr_bitpos == rr_bitpos + rr_bitsize))
        return build (wanted_code, truth_type,
                      make_bit_field_ref (ll_inner, type,
                                          ll_bitsize + rl_bitsize,
                                          MIN (ll_bitpos, rl_bitpos),
                                          ll_unsignedp),
                      make_bit_field_ref (lr_inner, type,
                                          lr_bitsize + rr_bitsize,
                                          MIN (lr_bitpos, rr_bitpos),
                                          lr_unsignedp));


In fact, I can get correct results for your combined testcase by adding just
the xll_bitpos == xlr_bitpos check and disabling the adjacent-field comparison
optimization.
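
For illustration, here is a hypothetical reduction of the kind of comparison
pair that check guards against (a sketch only, not the testcase from your
report).  The combined masks for the two words agree, but the individual
fields being compared sit at different bit positions, so folding the pair
into a single masked word comparison would compare the wrong bits:

/* Hypothetical reduction; not the original testcase.  f and g are
   cross-compared, so the combined mask for each word covers f|g and the
   masks agree, but f in one operand lines up against g in the other,
   i.e. xll_bitpos != xlr_bitpos.  */
struct bits { unsigned int f : 4; unsigned int g : 4; };

int
cross_compare (struct bits *s, struct bits *t)
{
  return s->f == t->g && s->g == t->f;
}

int
main (void)
{
  struct bits a = { 1, 2 };
  struct bits b = { 2, 1 };

  /* a.f == b.g and a.g == b.f, so this must exit with 0.  */
  return cross_compare (&a, &b) ? 0 : 1;
}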

We may still want to apply this change since, as you note, it can convert
some shifts into masking operations.  However, what I would prefer to do is
to apply just the xll_bitpos == xlr_bitpos fix initially so that we don't hide
any bugs, fix the remaining bugs, and then go back and install the optimization
portion of this patch, since the optimization code is not directly tied to
the bugfix itself.
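
To make the shift-to-mask point concrete, here is an illustrative hand-written
equivalent of the masked form (the 0xff mask and the memcpy word access are
assumptions about the layout for this sketch, not taken from the patch): when
the compared fields occupy the same positions in equal-sized words, both words
can simply be masked and compared, with no per-field extraction shifts.

/* Illustrative only.  field_compare is the source-level form;
   masked_word_compare shows the shape of the folded comparison when the
   fields occupy the same bit positions in equal-sized words: mask both
   words and compare, no shifts required.  The 0xff mask and the word
   layout are assumptions for this sketch.  */
#include <string.h>

struct bits { unsigned int f : 4; unsigned int g : 4; };

int
field_compare (struct bits *s, struct bits *t)
{
  return s->f == t->f && s->g == t->g;
}

int
masked_word_compare (struct bits *s, struct bits *t)
{
  unsigned int sw, tw;

  memcpy (&sw, s, sizeof sw);
  memcpy (&tw, t, sizeof tw);
  return (sw & 0xff) == (tw & 0xff);
}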

Does this make sense?

jeff

