[PATCH] Fix PR rtl-optimization/65067

Eric Botcazou <ebotcazou@adacore.com>
Fri Mar 6 09:12:00 GMT 2015


> Hmm.  As you also modify the no-strict-volatile-bitfield path I'm not sure
> you don't regress the case where EP_insv can work on memory.  I agree
> that simplifying the strict-volatile-bitfield path to extract the memory
> within strict-volatile-bitfield constraints to a reg and then using the
> regular path is a good thing.
> 
> Eric?

Even if the no-strict-volatile-bitfield path isn't touched, I don't understand
these two hunks:

@@ -976,7 +976,7 @@ store_bit_field (rtx str_rtx, unsigned HOST_WIDE_I
       /* Storing any naturally aligned field can be done with a simple
 	 store.  For targets that support fast unaligned memory, any
 	 naturally sized, unit aligned field can be done directly.  */
-      if (simple_mem_bitfield_p (str_rtx, bitsize, bitnum, fieldmode))
+      if (bitsize == GET_MODE_BITSIZE (fieldmode))
 	{
 	  str_rtx = adjust_bitfield_address (str_rtx, fieldmode,
 					     bitnum / BITS_PER_UNIT);


and this is the corresponding hunk in extract_bit_field:

     {
-      rtx result;
-
       /* Extraction of a full MODE1 value can be done with a load as long as
 	 the field is on a byte boundary and is sufficiently aligned.  */
-      if (simple_mem_bitfield_p (str_rtx, bitsize, bitnum, mode1))
-	result = adjust_bitfield_address (str_rtx, mode1,
-					  bitnum / BITS_PER_UNIT);
-      else
+      if (bitsize == GET_MODE_BITSIZE (mode1))
 	{
-	  str_rtx = narrow_bit_field_mem (str_rtx, mode1, bitsize, bitnum,
-					  &bitnum);
-	  result = extract_fixed_bit_field_1 (mode, str_rtx, bitsize, bitnum,
-					      target, unsignedp);
+	  rtx result = adjust_bitfield_address (str_rtx, mode1,
+						bitnum / BITS_PER_UNIT);
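
For reference, simple_mem_bitfield_p, which both hunks drop, reads like
this in expmed.c (quoting from memory, so the exact form may differ
slightly):

  static bool
  simple_mem_bitfield_p (rtx op0, unsigned HOST_WIDE_INT bitsize,
                         unsigned HOST_WIDE_INT bitnum, machine_mode mode)
  {
    /* A simple load/store is possible only for a MEM whose field sits
       on a byte boundary and is sufficiently aligned for the mode.  */
    return (MEM_P (op0)
            && bitnum % BITS_PER_UNIT == 0
            && bitsize == GET_MODE_BITSIZE (mode)
            && (!SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (op0))
                || (bitnum % GET_MODE_ALIGNMENT (mode) == 0
                    && MEM_ALIGN (op0) >= GET_MODE_ALIGNMENT (mode))));
  }

So besides the size test, it also checks MEM_P, the byte boundary and, on
strict-alignment targets, the alignment; the bare size comparison keeps
none of that.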


adjust_bitfield_address takes (bitnum / BITS_PER_UNIT), so don't you need
to make sure that bitnum % BITS_PER_UNIT == 0?
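
Something along these lines would at least keep the byte-boundary check
(untested, just to illustrate the point):

      if (bitsize == GET_MODE_BITSIZE (mode1)
          && bitnum % BITS_PER_UNIT == 0)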

In any case, the comments are now totally out of sync: both still promise
that the field is naturally aligned and on a byte boundary, while the new
tests no longer check alignment at all.

-- 
Eric Botcazou


