Masked bitfield comparisons may yield incorrect results (back from the dead)

Jeffrey A Law law@upchuck.cygnus.com
Fri Mar 26 22:25:00 GMT 1999


  In message <199903101241.HAA22967@zygorthian-space-raiders.mit.edu> you write:
  > 
  > It turns out there's yet *another* bug, which happened to be masked in
  > some cases due to the bugs I already fixed.  Consider this case:
  > 
  > -----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
  > struct foo {
  >         int junk;                       /* this is to force alignment */
  >         unsigned char w, x, y, z;
  > };
  > 
  > int
  > foo(a, b)
  >         struct foo *a, *b;
  > {
  > 
  >         return (a->w == b->x && a->x == b->y);
  > }
  > -----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
  > 
  > Note that the fields in the RHS of the comparisons are shifted one
  > byte from the fields in the LHS.
  > 
  > For the LHS, lnbitsize is 16, since w and x are adjacent and
  > aligned nicely.  For the RHS, rnbitsize is 32.  However, because the
  > RHS mask (lr_mask) is generated with the type created from lnbitsize,
  > it is truncated, losing the mask bits that cover one of the RHS
  > fields.  This is dangerous.
[ ... ]
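
To see concretely what goes wrong, here is a small self-contained model
of the combined compare -- not the actual fold-const code -- assuming a
little-endian target with w, x, y, z in the four bytes after the int.
With the full 32-bit lr_mask the combined test matches the source
expression; with the mask truncated to the 16-bit LHS type, the
a->x == b->y half of the test is silently dropped.

-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct foo { int junk; unsigned char w, x, y, z; };

/* Model of the combined compare: a 16-bit fetch covers a->w and a->x,
   while b->x and b->y straddle a halfword boundary and need a 32-bit
   fetch of b->w..b->z, masked and shifted into place.  */
static int
combined (struct foo *a, struct foo *b, uint32_t lr_mask)
{
  uint16_t lhs;                 /* a->w in bits 0-7, a->x in bits 8-15  */
  uint32_t rhs;                 /* b->x in bits 8-15, b->y in bits 16-23 */
  memcpy (&lhs, &a->w, sizeof lhs);
  memcpy (&rhs, &b->w, sizeof rhs);
  return lhs == (uint16_t) ((rhs & lr_mask) >> 8);
}

int
main (void)
{
  struct foo a = { 0, 1, 2, 0, 0 };     /* a->w = 1, a->x = 2 */
  struct foo b = { 0, 9, 1, 2, 9 };     /* b->x = 1, b->y = 2 */

  uint32_t full_mask      = 0x00ffff00;              /* covers b->x, b->y */
  uint32_t truncated_mask = (uint16_t) 0x00ffff00;   /* 0xff00: b->y lost */

  printf ("source expression : %d\n", a.w == b.x && a.x == b.y);          /* 1 */
  printf ("full lr_mask      : %d\n", combined (&a, &b, full_mask));      /* 1 */
  printf ("truncated lr_mask : %d\n", combined (&a, &b, truncated_mask)); /* 0 */
  return 0;
}
-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----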

  > The following additional patch fixes this problem, though perhaps not
  > in the prettiest way.  The reason for the slightly strange
  > organization is so that the size reduction is done to the fetched
  > values and the masks before they are combined.  At least on the x86,
  > this causes a movw to be generated for the RHS rather than a movzwl,
  > and the movw is faster on some processors.
[ ... ]
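
One plausible reading of "the size reduction is done to the fetched
values and the masks before they are combined" -- the actual patch may
order the shift and conversion differently -- is sketched below,
continuing the little-endian model above: the RHS word and lr_mask are
narrowed to the 16-bit compare type first, so the whole test is carried
out in the narrower mode and the 32-bit fetch plus shift can collapse
into a plain 16-bit load.

-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
#include <stdint.h>

/* Sketch only; lhs, rhs and lr_mask are as in the model above.  */
int
combined_narrowed (uint16_t lhs, uint32_t rhs, uint32_t lr_mask)
{
  /* Reduce the fetched value and the mask to the 16-bit type before
     combining; the masked compare then happens entirely in HImode.  */
  uint16_t rhs16  = (uint16_t) (rhs >> 8);
  uint16_t mask16 = (uint16_t) (lr_mask >> 8);    /* 0x00ffff00 -> 0xffff */
  return (lhs & mask16) == (rhs16 & mask16);
}
-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----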
Thanks.  Installed.

Note that we _may_ want to make the early size reduction dependent on 
!SLOW_BYTE_ACCESS or something like that.  Consider the partial register
stalls on the PPro/PII/PIII.
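
For reference, the tradeoff behind that caveat: a movw writes only the
low 16 bits of a register, and on the PPro/PII/PIII a later read of the
full 32-bit register then takes a partial register stall, whereas a
movzwl writes the whole register and avoids the stall (movzwl was
itself a slower instruction on some earlier x86 processors, which is
presumably the "faster on some processors" point above).  A rough C
illustration of the two access shapes follows; whether a given compiler
and option set actually emits a movw for the first one is not
guaranteed.

-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----
#include <stdint.h>

/* May be compiled as   movw (%eax), %ax   -- only the low half of the
   register is written, so the following 32-bit use of the value can
   hit a partial register stall on the PPro family.  */
uint32_t
narrow_load (const uint16_t *p)
{
  uint16_t t = *p;
  return (uint32_t) t + 1;
}

/* May be compiled as   movzwl (%eax), %eax   -- the whole register is
   written up front, so there is no partial register stall.  */
uint32_t
widening_load (const uint16_t *p)
{
  uint32_t t = *p;
  return t + 1;
}
-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----snip-----8<-----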

jeff

