[PATCH] Fix another wrong-code bug with -fstrict-volatile-bitfields

Bernd Edlinger bernd.edlinger@hotmail.de
Fri Mar 6 09:29:00 GMT 2015


Hi,

On Thu, 5 Mar 2015 16:36:48, Richard Biener wrote:
>
> On Thu, Mar 5, 2015 at 4:05 PM, Bernd Edlinger
> <bernd.edlinger@hotmail.de> wrote:
>>
>> every access is split in 4 QImode accesses. but that is as
>> expected, because the structure is byte aligned.
>
> No, it is not expected because the CPU can handle unaligned SImode
> reads/writes just fine (even if not as an atomic operation).
> The C++ memory model allows an SImode read to s.y (-fstrict-volatile-bitfields
> would, as well, but the CPU doesn't guarantee atomic operation here)
>

Hmm, well.  I understand.

However, this is how the non-strict-volatile-bitfields path normally
works.

But I can probably work out a patch that enables the strict-volatile-bitfields
path to generate unaligned SImode accesses where necessary on !STRICT_ALIGNMENT
targets.

Now, I checked the ARM port again, and I am a bit surprised: with
gcc 4.9.0 this structure gets 4 QImode accesses, which is correct,
but with recent trunk (gcc version 5.0.0 20150301) I get SImode accesses,
even though the structure is not aligned and the compiler can't possibly know
how the memory will be aligned.  Something must have changed in the meantime,
but it wasn't by me.  IIRC the field mode in this example was QImode, but now
it seems to be SImode.


struct s
{
  unsigned int y:31;
} __attribute__((packed));

int
test (volatile struct s* x)
{
  x->y = 0x7FFFFFFF;
  return x->y;
}
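
For illustration, the four QImode accesses that gcc 4.9.0 emits here are
roughly equivalent to the following byte-at-a-time read (a sketch only,
assuming a little-endian target; read_y_bytewise is a made-up name, not
actual compiler output):

```c
#include <stdint.h>

/* Sketch: what a byte-split (QImode) read of the packed 31-bit field
   looks like, assuming a little-endian target.  Each array access is a
   one-byte volatile load; no 4-byte (SImode) access is made, so the
   code is safe even when p is only byte-aligned.  */
static uint32_t
read_y_bytewise (const volatile void *p)
{
  const volatile uint8_t *b = p;
  uint32_t v = 0;
  for (int i = 0; i < 4; i++)
    v |= (uint32_t) b[i] << (8 * i);  /* QImode load of byte i */
  return v & 0x7FFFFFFFu;             /* mask off the one padding bit */
}
```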


So what would you think of this change to strict_volatile_bitfield_p?

diff -up expmed.c.jj expmed.c
--- expmed.c.jj    2015-01-16 11:20:40.000000000 +0100
+++ expmed.c    2015-03-06 10:07:14.362383274 +0100
@@ -472,9 +472,9 @@ strict_volatile_bitfield_p (rtx op0, uns
     return false;
 
   /* Check for cases of unaligned fields that must be split.  */
-  if (bitnum % BITS_PER_UNIT + bitsize > modesize
-      || (STRICT_ALIGNMENT
-      && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
+  if (bitnum % (STRICT_ALIGNMENT ? modesize : BITS_PER_UNIT)
+      + bitsize > modesize
+      || (STRICT_ALIGNMENT && MEM_ALIGN (op0) < modesize))
     return false;
 
   /* Check for cases where the C++ memory model applies.  */


Of course this is incomplete and still needs special handling for
!STRICT_ALIGNMENT in the strict-volatile-bitfields code path later on.
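
To make the arithmetic of the proposed check concrete, here is a
standalone sketch with the RTL machinery stripped out
(field_must_be_split, strict_alignment and mem_align are made-up
stand-ins for the real function and the STRICT_ALIGNMENT /
MEM_ALIGN (op0) macros):

```c
#include <stdbool.h>

/* Standalone sketch of the revised unaligned-field check.  Returns
   true when the single-mode access must be split: either the field
   crosses a modesize boundary (measured from byte granularity on
   !strict_alignment targets, word granularity otherwise), or the
   memory itself is too weakly aligned for a strict-alignment target. */
static bool
field_must_be_split (unsigned bitnum, unsigned bitsize,
                     unsigned modesize, unsigned bits_per_unit,
                     bool strict_alignment, unsigned mem_align)
{
  if (bitnum % (strict_alignment ? modesize : bits_per_unit)
        + bitsize > modesize
      || (strict_alignment && mem_align < modesize))
    return true;
  return false;
}
```

For the struct s example above (bitnum = 0, bitsize = 31, modesize = 32,
byte-aligned memory): on a !STRICT_ALIGNMENT target the access fits, so
a single unaligned SImode access is allowed; on a STRICT_ALIGNMENT
target the MEM_ALIGN check forces the split.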



Thanks
Bernd.


More information about the Gcc-patches mailing list