This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [patch] Fix PR middle-end/58570
- From: Eric Botcazou <ebotcazou@adacore.com>
- To: Richard Biener <richard.guenther@gmail.com>
- Cc: gcc-patches@gcc.gnu.org
- Date: Tue, 08 Oct 2013 19:52:45 +0200
- Subject: Re: [patch] Fix PR middle-end/58570
- Authentication-results: sourceware.org; auth=none
- References: <18178507.ChivVPE93z@polaris> <CAFiYyc3ozgPCiUaTi6v1UJ600m4XU889gd=Va73bC8WN9eNPpw@mail.gmail.com>
> Probably because the actual accesses may overlap if we choose to
> perform a bigger access.
Nope, simply because they share a byte.
> The same can happen if we for struct { char c1; char c2; } perform
> an HImode access in case the target doesn't support QImode accesses.
> Basically anytime we go through the bitfield expansion path. Thus, doesn't
> that mean that MEM_EXPR is wrong on the MEMs? Maybe we used to
> strip all DECL_BIT_FIELD component-refs at some point (adjusting
> MEM_OFFSET accordingly)?
Yes, we used to strip the MEM_EXPRs whenever we went through the bitfield
expansion path, until last year, when I changed it:
2012-09-14  Eric Botcazou  <ebotcazou@adacore.com>

	PR rtl-optimization/44194
	* calls.c (expand_call): In the PARALLEL case, copy the return value
	into pseudos instead of spilling it onto the stack.
	* emit-rtl.c (adjust_address_1): Rename ADJUST into ADJUST_ADDRESS and
	add new ADJUST_OBJECT parameter.
	If ADJUST_OBJECT is set, drop the underlying object if it cannot be
	proved that the adjusted memory access is still within its bounds.
	(adjust_automodify_address_1): Adjust call to adjust_address_1.
	(widen_memory_access): Likewise.
	* expmed.c (store_bit_field_1): Call adjust_bitfield_address instead
	of adjust_address.  Do not drop the underlying object of a MEM.
	(store_fixed_bit_field): Likewise.
	(extract_bit_field_1): Likewise.  Fix oversight in recursion.
	(extract_fixed_bit_field): Likewise.
	* expr.h (adjust_address_1): Adjust prototype.
	(adjust_address): Adjust call to adjust_address_1.
	(adjust_address_nv): Likewise.
	(adjust_bitfield_address): New macro.
	(adjust_bitfield_address_nv): Likewise.
	* expr.c (expand_assignment): Handle a PARALLEL in more cases.
	(store_expr): Likewise.
	(store_field): Likewise.
But this was done carefully, i.e. we still drop the MEM_EXPRs if we cannot
prove that they are still valid.  Now, the granularity of memory accesses at
the RTL level is the byte, so everything is rounded up to byte boundaries;
that's why bitfields sharing a byte need to be dealt with specially.
> Your patch seems to paper over this issue in the wrong place ...
No, it's the proper, albeit conservative, fix in my opinion.
--
Eric Botcazou