[patch, ARM] Fix PR48250, adjust DImode reload address legitimizing

Richard Earnshaw rearnsha@arm.com
Thu Mar 31 18:25:00 GMT 2011


On Thu, 2011-03-31 at 22:18 +0800, Chung-Lin Tang wrote:
> On 2011/3/31 06:14 PM, Richard Earnshaw wrote:
> > 
> > On Thu, 2011-03-31 at 11:33 +0800, Chung-Lin Tang wrote:
> >> On 2011/3/30 05:28 PM, Richard Earnshaw wrote:
> >>>
> >>> On Wed, 2011-03-30 at 15:35 +0800, Chung-Lin Tang wrote:
> >>>> On 2011/3/30 12:23 AM, Richard Earnshaw wrote:
> >>>>>
> >>>>> On Tue, 2011-03-29 at 22:53 +0800, Chung-Lin Tang wrote:
> >>>>>> On 2011/3/29 10:26 PM, Richard Earnshaw wrote:
> >>>>>>> On Tue, 2011-03-29 at 18:25 +0800, Chung-Lin Tang wrote:
> >>>>>>>> On 2011/3/24 06:51 PM, Richard Earnshaw wrote:
> >>>>>>>>>
> >>>>>>>>> On Thu, 2011-03-24 at 12:56 +0900, Chung-Lin Tang wrote:
> >>>>>>>>>> Hi,
> >>>>>>>>>> PR48250 happens under TARGET_NEON, where DImode is included among the
> >>>>>>>>>> valid NEON modes. This restricts the legitimate constant indexes to
> >>>>>>>>>> multiples of 4 (coproc load/store), so when
> >>>>>>>>>> arm_legitimize_reload_address() tries to decompose a [reg+index] reload
> >>>>>>>>>> address into [(reg+index_high)+index_low], it can cause an ICE later
> >>>>>>>>>> when the 'index_low' part is not aligned to 4.
> >>>>>>>>>>
> >>>>>>>>>> I'm not sure why the current DImode index is computed as
> >>>>>>>>>> low = ((val & 0xf) ^ 0x8) - 0x8;  sign-extending into negative
> >>>>>>>>>> values and then subtracting back actually creates indexes that are
> >>>>>>>>>> further off.  E.g. in the supplied testcase, [sp+13] was turned into
> >>>>>>>>>> [(sp+16)-3].
> >>>>>>>>>>
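> >>>>>>>>>> Spelling out that testcase value: with val = 13,
> >>>>>>>>>>
> >>>>>>>>>> 	low  = ((13 & 0xf) ^ 0x8) - 0x8 = (0xd ^ 0x8) - 0x8 = 5 - 8 = -3
> >>>>>>>>>> 	high = val - low = 13 - (-3) = 16
> >>>>>>>>>>
> >>>>>>>>>> so the address becomes [(sp+16)-3], and -3 is not the multiple of 4
> >>>>>>>>>> that the coproc/NEON addressing requires.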
> >>>>>>>>>
> >>>>>>>>> Hysterical Raisins... the code there was clearly written for the days
> >>>>>>>>> before we had LDRD in the architecture.  At that time the most efficient
> >>>>>>>>> way to load a 64-bit object was to use the LDM{ia,ib,da,db}
> >>>>>>>>> instructions.  The computation here was (I think) intended to try and
> >>>>>>>>> make the most efficient use of an add/sub instruction followed by
> >>>>>>>>> LDM/STM offsetting.  At that time the architecture had no unaligned
> >>>>>>>>> access either, so dealing with 64-bit objects that were less than 32-bit
> >>>>>>>>> aligned (in those days 32-bit was the maximum alignment) probably wasn't
> >>>>>>>>> considered, or such cases couldn't even get through to reload.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> I see it now. The code in output_move_double() returning assembly for
> >>>>>>>> ldm/stm(db/da/ib) for offsets -8/-4/+4 seems to confirm this.
> >>>>>>>>
> >>>>>>>> I have changed the patch to let the new code handle the TARGET_LDRD case
> >>>>>>>> only.  The pre-LDRD case is still handled by the original code, with an
> >>>>>>>> additional & ~0x3 for aligning the offset to 4.
> >>>>>>>>
> >>>>>>>> I've also added a comment for the pre-TARGET_LDRD case. Please see if
> >>>>>>>> the description is accurate enough.
> >>>>>>>>
> >>>>>>>>>> My patch changes the index decomposition to a more straightforward way;
> >>>>>>>>>> it also sort of highlights that the way the other reload address indexes
> >>>>>>>>>> are broken up, using AND-masks, is not the most effective.  Since the
> >>>>>>>>>> address is computed by addition, subtracting away the parts to obtain
> >>>>>>>>>> low+high should be the optimal way of giving the largest computable
> >>>>>>>>>> index range.
> >>>>>>>>>>
> >>>>>>>>>> I have included a few Thumb-2 bits in the patch; I know currently
> >>>>>>>>>> arm_legitimize_reload_address() is only used under TARGET_ARM, but I
> >>>>>>>>>> guess it might eventually be turned into TARGET_32BIT.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> I think this needs to be looked at carefully on ARMv4/ARMv4T to check
> >>>>>>>>> that it doesn't cause regressions there when we don't have LDRD in the
> >>>>>>>>> instruction set.
> >>>>>>>>
> >>>>>>>> I'll be testing the modified patch under an ARMv4/ARMv4T configuration.
> >>>>>>>> Okay for trunk if no regressions?
> >>>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Chung-Lin
> >>>>>>>>
> >>>>>>>> 	PR target/48250
> >>>>>>>> 	* config/arm/arm.c (arm_legitimize_reload_address): Adjust
> >>>>>>>> 	DImode constant index decomposing under TARGET_LDRD. Clear
> >>>>>>>> 	lower two bits for NEON, Thumb-2, and !TARGET_LDRD. Add
> >>>>>>>> 	comment for !TARGET_LDRD case.
> >>>>>>>
> >>>>>>> This looks technically correct, but I can't help feeling that trying to
> >>>>>>> deal with the bottom two bits being non-zero is not really worthwhile.
> >>>>>>> This hook is an optimization that's intended to generate better code
> >>>>>>> than the default mechanisms that reload provides.  It is allowed to
> >>>>>>> fail, but it must say so if it does (by returning false).
> >>>>>>>
> >>>>>>> What this hook is trying to do for DImode is to take an address of the
> >>>>>>> form (reg + TOO_BIG_CONST) and turn it into two instructions that are
> >>>>>>> legitimate:
> >>>>>>>
> >>>>>>> 	tmp = (REG + LEGAL_BIG_CONST)
> >>>>>>> 	some_use_of (mem (tmp + SMALL_LEGAL_CONST))
> >>>>>>>
> >>>>>>> The idea behind the optimization is that LEGAL_BIG_CONST will need fewer
> >>>>>>> instructions to generate than TOO_BIG_CONST.  It's unlikely (probably
> >>>>>>> impossible in ARM state) that pushing the bottom two bits of the address
> >>>>>>> into the LEGAL_BIG_CONST part of the offset is going to lead to a better
> >>>>>>> code sequence.
> >>>>>>>
> >>>>>>> So I think it would be more sensible to just return false if we have a
> >>>>>>> DImode address with the bottom 2 bits non-zero and then let the normal
> >>>>>>> reload recovery mechanism take over.
> >>>>>>
> >>>>>> It is supposed to provide better utilization of the full range of
> >>>>>> LEGAL_BIG_CONST+SMALL_LEGAL_CONST.  I am not sure, but I guess reload
> >>>>>> will resolve it with the reg+LEGAL_BIG_CONST part only, using only (mem
> >>>>>> (reg)) for the load/store (correct me if I'm wrong).
> >>>>>>
> >>>>>> Also, the new code slightly improves the reloading; for example, currently
> >>>>>> [reg+64] is broken into [(reg+72)-8], creating an additional unneeded
> >>>>>> reload, which is certainly not good when we have ldrd/strd available.
> >>>>>>
> >>>>>
> >>>>> Sorry, didn't mean to suggest that we don't want to do something better
> >>>>> for addresses that are a multiple of 4, just that for addresses that
> >>>>> aren't (at least) word-aligned we should just bail, as the code in
> >>>>> that case won't benefit from the optimization.  So something like
> >>>>>
> >>>>>        if (mode == DImode || (mode == DFmode && TARGET_SOFT_FLOAT))
> >>>>> 	 {
> >>>>> 	    if (val & 3)
> >>>>> 		return false;  /* No point in trying to handle this.  */
> >>>>> 	    ... /* Cases that are useful to handle */
> >>>>
> >>>> I've looked at the reload code surrounding the call to
> >>>> LEGITIMIZE_RELOAD_ADDRESS.  It looks like for ARM, reload transforms the
> >>>> address from [reg+#const] to newreg=#const; [reg+newreg].  ARM/Thumb-2
> >>>> has 16 bits to move that constant, which is much wider in range
> >>>> than a 12-bit constant operand + 8-bit index.  So I agree that simply
> >>>> bailing out should be okay.
> >>>>
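> >>>> (Roughly speaking, that fallback has the shape
> >>>>
> >>>> 	movw	ip, #4100	@ 16-bit immediate move
> >>>> 	ldrd	r0, r1, [r2, ip]
> >>>>
> >>>> for an over-range DImode offset, assuming a movw-capable core; the
> >>>> choice of reload register is of course up to reload.)
> >>>>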
> >>>> OTOH, I'll still add that, for some micro-architectures, register read
> >>>> ports may be a critical resource; for those cores, handling as many
> >>>> reloads here as possible by breaking the address into an add is still
> >>>> slightly better than a 'move + [reg+reg]', since the latter's load/store
> >>>> uses one more register read.  So maybe the best approach would be to
> >>>> handle the case where the 'high' part is a valid add-immediate operand,
> >>>> and bail out if not...
> >>>>
> >>>> C.L.
> >>>
> >>> If the address is unaligned, then the access is going to be slow anyway;
> >>> but this is all corner case stuff - the vast majority of accesses will
> >>> be at natural alignment.  I think it's better to seek clarity in these
> >>> cases than outright performance in theoretical micro-architectural
> >>> corner cases.
> >>>
> >>> The largest number of read ports would be needed by a store [reg+reg].
> >>> That's only 3 ports for a normal store (four for a pair of registers),
> >>> but cores can normally handle this without penalty by reading the
> >>> address registers in one cycle and the data to be stored in a later
> >>> cycle -- critical paths tend to be on address generation, not data to be
> >>> stored.
> >>
> >> Actually, I was thinking of cores with dual-issue, where an additional
> >> port read may prevent it from happening...
> >>
> >> Anyways, here's a new patch. I've removed the unaligned handling bits as
> >> you suggested, simply returning false for those cases.
> >>
> >> The points above did inspire another improvement, I think. I have added
> >> a test to also return false when the high part is not a valid immediate
> >> operand.  The rationale is that, after such a reg=reg+high address
> >> computation is created, it will still have to be split into multiple adds
> >> later, so it may be better to let reload turn it into the [reg+reg] form.
> >>
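> >> (I.e. roughly something along the lines of
> >>
> >> 	if (!const_ok_for_arm (high))
> >> 	  return false;
> >>
> >> before creating the reg=reg+high add.)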
> > 
> > Hmm, I think you've missed the point with some of this, which is that
> > not only is it generally more efficient to try and use offset addressing,
> > but careful selection of the immediate values used in the load and the
> > ADD insns can often also lead to better reload CSE.  For example:
> > 
> > 	ldr	r0, [r2, #4100]  // Offset too large
> > 	ldr	r1, [r2, #4104]  // Offset too large
> > 
> > is best reloaded as
> > 	add	t1, r2, #4096
> > 	ldr	r0, [t1, #4]
> > 	add	t2, r2, #4096
> > 	ldr	r1, [t2, #8]
> > 
> > which of course post-reload CSE can simplify in most cases to eliminate
> > the second add instruction:
> > 
> > 	add	t1, r2, #4096
> > 	ldr	r0, [t1, #4]
> > 	ldr	r1, [t1, #8]
> > 
> > This is true even if the amount of the offset being split out is larger
> > than a simple legitimate_constant.
> > 
> > The idea here is that we want to split out the bits of the constant as a
> > mask rather than by subtracting the maximum offset that ldr can handle
> > (the same principle applies to LDRD too).
> > 
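> > For LDRD, with its 8-bit offset range, the analogous split of an
> > over-range offset such as #260 would be something like
> > 
> > 	add	t1, r2, #256
> > 	ldrd	r0, r1, [t1, #4]
> > 
> > so that a neighbouring access at, say, #268 becomes an ldrd at [t1, #12]
> > and post-reload CSE can again remove its add.
> > 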
> > A further trick is that we can make use of negative offsets even if the
> > overall offset is positive, and that sometimes this may lead to an
> > immediate that can be constructed with one fewer add instruction.  For
> > example,
> > 
> > 	ldr	r0, [r2, #0x3FFFFC]
> > 
> > This is best reloaded as:
> > 
> > 	add	t1, r2, #0x400000
> > 	ldr	r0, [t1, #-4]
> > 
> > The trick for spotting this for a load instruction with N bits of offset
> > (i.e. bits N-1:0) is to look at bit N: if it is set, then choose a negative
> > offset that is going to make bit N and all the bits below it come to
> > zero in the remainder part.
> > 
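> > Applied to the example above with ldr's 12-bit offset (N = 12):
> > 0x3FFFFC has bit 12 set, so take the low 13 bits (0x1FFC) and
> > sign-extend from bit 12, giving low = 0x1FFC - 0x2000 = -4; the
> > remainder 0x3FFFFC - (-4) = 0x400000 is then a single add immediate.
> > 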
> > The final thing to note is that the negative offsets available in Thumb-2
> > are asymmetrical with respect to the positive values that are available.
> > That makes selecting the best offset more complicated, and thus using
> > negative values is less likely to be worthwhile.
> 
> Richard, thanks for the detailed explanation, you should really turn
> this into a comment in arm.c some time.
> 

Feel free to put it in...

> I have revised the patch again, this time mostly in an analogous form
> following the other cases, as you explained above.
> 
> Thanks,
> Chung-Lin

Except that the existing ARM cases aren't quite doing the best they can;
in fact, the only case that fully exploits negative offsetting is the
pre-v5 DImode case, but that uses 2's-complement addressing so it's
fairly easy :-).

The correct calculation for low for the TARGET_LDRD case should be

#define SIGN_MAG_LOW_ADDR_BITS(VAL, N) \
  (((VAL) & ((1 << (N)) - 1)) \
   ? (((VAL) & ((1 << ((N) + 1)) - 1)) ^ (1 << (N))) - (1 << (N)) : 0)

Now you can simply write

	low = SIGN_MAG_LOW_ADDR_BITS (val, 8);
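
For instance, taking an LDRD access at offset 0x3FC (just as an example
value), this gives

	low  = ((0x3FC & 0x1ff) ^ 0x100) - 0x100 = 0xfc - 0x100 = -4
	high = 0x3FC - (-4) = 0x400

i.e.

	add	t1, r2, #0x400
	ldrd	r0, r1, [t1, #-4]

and when the low 8 bits are already zero the whole offset simply goes
into the add.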


Really the other cases that use sign-magnitude addressing should be
updated similarly.

R.



