This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH PR68030/PR69710][RFC]Introduce a simple local CSE interface and use it in vectorizer


On 11/22/2016 08:07 AM, Bin.Cheng wrote:
> On Mon, Nov 21, 2016 at 9:34 PM, Doug Gilmore <Doug.Gilmore@imgtec.com> wrote:
>> I haven't seen any follow-ups to this discussion of Bin's patch for
>> PR68030 and PR69710, the patch submission:
>> http://gcc.gnu.org/ml/gcc-patches/2016-05/msg02000.html
>>
>> Discussion:
>> http://gcc.gnu.org/ml/gcc-patches/2016-07/msg00761.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-06/msg01551.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-06/msg00372.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-06/msg01550.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-05/msg02162.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-05/msg02155.html
>> http://gcc.gnu.org/ml/gcc-patches/2016-05/msg02154.html
>>
>>
>> so I did some investigation to get a better understanding of the
>> issues involved.
> Hi Doug,
> Thanks for looking into this problem.
>>
>> On 07/13/2016 01:59 PM, Jeff Law wrote:
>>> On 05/25/2016 05:22 AM, Bin Cheng wrote:
>>>> Hi, As analyzed in PR68030 and PR69710, the vectorizer generates
>>>> duplicated computations in the loop's pre-header basic block when
>>>> creating base addresses for vector references to the same memory
>>>> object.
>>> Not a huge surprise.  Loop optimizations generally have a tendency
>>> to create and/or expose CSE opportunities.  Unrolling is a common
>>> culprit; there's certainly the possibility for header duplication,
>>> code motions and IV rewriting to also expose/create redundant code.
>>>
>>> ...
>>>
>>>  But, 1) It
>>>> doesn't fix the whole problem on x86_64.  The root cause is that
>>>> the computation of the base address of the first reference is
>>>> somehow moved outside of the loop's pre-header, so local CSE can't
>>>> help in this case.
>>> That's a bit odd -- have you investigated why this is outside the loop header?
>>> ...
>> I didn't look at this issue per se, but I did try running DOM between
>> autovectorization and IVOPTs.  Just running DOM had little effect;
>> what was crucial was adding the change Bin mentioned in his original
>> message:
>>
>>     Besides CSE issue, this patch also re-associates address
>>     expressions in vect_create_addr_base_for_vector_ref, specifically,
>>     it splits constant offset and adds it back near the expression
>>     root in IR.  This is necessary because GCC only handles
>>     re-association for commutative operators in CSE.
>>
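(For concreteness, a minimal sketch of that re-association, using the
names from Bin's example below and made-up constant offsets.  Before the
change the constant is buried inside the pointer arithmetic, where CSE's
re-association -- which only handles commutative operators -- cannot
reach it, so CSE never sees that the variable parts of the two addresses
are identical; after it, the constant is split out and re-added at the
expression root, exposing the common base.)

    // before: constant buried inside the address computation
    vectp_1 = g_Input + (var_offset + 16);
    vectp_2 = g_Input + (var_offset + 32);

    // after: constant split out and re-added near the expression root
    b_1 = g_Input + var_offset;
    vectp_1 = b_1 + 16;
    b_2 = g_Input + var_offset;
    vectp_2 = b_2 + 32;
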
>> I attached a patch for these changes only.  These are the important
>> modifications that address some of the IVOPTs-related issues exposed
>> by PR68030.  I found that adding the CSE change (or calling DOM
>> between autovectorization and IVOPTs) is not needed, and from what I
>> have
> I checked the code again.  As you said, the re-association part is
> important for enabling CSE opportunities, no matter when or which pass
> handles it.  After re-association, the computation of the base
> addresses looks like:
> 
>     //preheader
>     b_1 = g_Input + var_offset_1;
>     vectp_1 = b_1 + cst_offset_1;
>     b_2 = g_Input + var_offset_2;
>     vectp_2 = b_2 + cst_offset_2;
>     ...
>     b_n = g_Input + var_offset_n;
>     vectp_n = b_n + cst_offset_n;
> 
>     //loop
>     MEM[vectp_1];
>     MEM[vectp_2];
>     ...
>     MEM[vectp_n];
> 
> In fact, var_offset_1, var_offset_2, ..., var_offset_n are all equal
> to each other.  So the addresses have the form
> "g_Input + var_offset + cst_offset_x", differing from each other only
> in the constant offset.  The purpose of CSE is to propagate all parts
> of this address to IVOPTs; otherwise IVOPTs only sees the IV uses as
> below:
> 
>     iv_use_1: {b_1 + cst_offset_1, step}_loop
>     iv_use_2: {b_2 + cst_offset_2, step}_loop
>     ...
>     iv_use_n: {b_n + cst_offset_n, step}_loop
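
(For contrast, once CSE has propagated the common base into each use,
IVOPTs would instead see something like the following, where every use
shares the base g_Input + var_offset and differs only in the constant
offset, so a single IV can serve all of them:)

    iv_use_1: {g_Input + var_offset + cst_offset_1, step}_loop
    iv_use_2: {g_Input + var_offset + cst_offset_2, step}_loop
    ...
    iv_use_n: {g_Input + var_offset + cst_offset_n, step}_loop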
> 
>> seen, actually makes the code worse.
>>
>> With only the modifications to vect_create_addr_base_for_vector_ref
>> applied, additional simplifications are done when induction variables
>> are found (in function find_induction_variables).  These
>> simplifications are indicated by the appearance of lines such as:
>>
>> Applying pattern match.pd:1056, generic-match.c:11865
> This doesn't look related to this problem to me.  The simplification
> needed for this problem is CSE, which is not what match.pd does.
> 
>>
>> in the IVOPTs dump file.  IVOPTs then transforms the code so that
>> constants appear in the computation of the effective addresses for
>> the memory OPs.  However, the code generated by IVOPTs still uses a
>> separate base register for each memory reference.  Later, DOM3
>> transforms the code to use just one base register, which is the form
> 
> Indeed, CSE now looks unnecessary for fixing the problem; we can rely
> on the DOM pass to discover the equality among the new bases
> (b_1, b_2, ..., b_n).  This actually echoes my humble opinion: we
> shouldn't rely on IVOPTs to fix all bad-code issues.  On the other
> hand, for cases in which these bases (b_1, b_2, ..., b_n) are not
> equal to each other, there is not much to lose in this way either.
> 
>> the code needs to be in for the preliminary phase of IVOPTs where
>> "IV uses" associated with memory OPs are placed into groups.  At the
>> time of this grouping, checks are done to ensure that for each member
>> of a group the constant offsets don't overflow the immediate fields in
>> actual machine instructions (more on this, see * below).
>>
>> Currently it appears that an IV is generated for each memory
>> reference.  Instead of generating a new IV for each memory reference,
>> we could try to detect that the value of the new IV is just a
>> constant offset from an existing IV and simply generate a new temp
>> reflecting that.
>> I haven't worked through what needs to be done to implement that, but
>> for the issue in PR69710 (saxpy example where the same IV should be
> The basic idea is to re-associate the generated base addresses for
> vectors in order to enable more CSE opportunities.  The original patch
> re-associates all vector base addresses, whether or not they share a
> common subexpression with others.  This is not good and could result
> in sub-optimal code; that's one reason why the patch has not been
> updated.  The old idea here is to introduce a vectorizer-local fix,
> for example using a hash table that stores existing base addresses and
> generating new ones (which differ only in the constant offset) based
> on it.
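
(A toy, self-contained C sketch of that hash-table idea, just to make
the shape concrete.  The types, names and printed "statements" are
invented for illustration and are not GCC's internal API: the first
request for a given base emits its computation, and later requests that
differ only in the constant offset reuse the recorded temporary.)

    #include <stdio.h>
    #include <string.h>

    /* One cached base address: a textual key for "base + variable offset"
       and the name of the temporary that already holds it.  Toy types.  */
    struct base_entry { char key[64]; char tmp[16]; };

    static struct base_entry cache[16];
    static int n_cached, n_tmps;

    /* Return the temp holding KEY, "emitting" its computation only once.  */
    static const char *
    get_base_tmp (const char *key)
    {
      for (int i = 0; i < n_cached; i++)
        if (strcmp (cache[i].key, key) == 0)
          return cache[i].tmp;                   /* reuse: no new statement */
      struct base_entry *e = &cache[n_cached++];
      snprintf (e->key, sizeof e->key, "%s", key);
      snprintf (e->tmp, sizeof e->tmp, "b_%d", ++n_tmps);
      printf ("  %s = %s;\n", e->tmp, key);      /* emit the base only once */
      return e->tmp;
    }

    int
    main (void)
    {
      /* Three vector pointers sharing the variable part of the address and
         differing only in the constant offset: only one base is emitted.  */
      int cst_offset[3] = { 0, 16, 32 };
      for (int i = 0; i < 3; i++)
        printf ("  vectp_%d = %s + %d;\n", i + 1,
                get_base_tmp ("g_Input + var_offset"), cst_offset[i]);
      return 0;
    }
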
> 
>> used for a load and store) is straightforward to implement, since
>> work has already been done during data dependence analysis to detect
>> this situation.  I attached a patch for PR69710 that was bootstrapped
>> and tested on x86_64 without errors.  It does appear that it needs
>> more testing, since I did notice that SPEC 2006 h264ref produces
>> different results with the patch applied, which I still need to
>> investigate.
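
(For reference, the saxpy shape being discussed looks roughly like the
kernel below; this is an illustrative version, not the exact PR69710
testcase.  The load from y[i] and the store back to y[i] use the same
address, so a single IV should serve both, rather than one IV per
memory reference.)

    void
    saxpy (int n, float a, const float *x, float *y)
    {
      for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* load and store share the address &y[i] */
    }
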
> Your patch reminds me of another possible method.  Like tree-predcom.c
> does, we could use a light-weight merge-find (union-find) structure
> that records groups of data-refs, where the data-refs in a group share
> the same base address (except for the constant offset).  We can build
> such group information while analyzing each ddr.  This is lighter than
> a hash table and has optimal time complexity.
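
(A minimal sketch of that merge-find structure, again with invented
names rather than anything from GCC: each data-ref gets an index, two
refs are united when ddr analysis finds their bases equal modulo a
constant offset, and afterwards find () names each ref's group leader.)

    #define MAX_DRS 64

    /* parent[i] == i means data-ref i currently leads its own group.  */
    static int parent[MAX_DRS];

    static void
    init_groups (int n)
    {
      for (int i = 0; i < n; i++)
        parent[i] = i;
    }

    static int
    find (int x)
    {
      while (parent[x] != x)
        x = parent[x] = parent[parent[x]];   /* path halving */
      return x;
    }

    /* Analyzing a ddr showed that A's and B's base addresses differ only
       by a constant offset: merge their groups.  */
    static void
    unite (int a, int b)
    {
      parent[find (a)] = find (b);
    }

    int
    main (void)
    {
      init_groups (3);
      unite (0, 2);   /* e.g. dr0 and dr2 share "g_Input + var_offset" */
      return (find (0) == find (2) && find (1) != find (0)) ? 0 : 1;
    }
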
Hi Bin,

Thank you for the quick response.

Isn't this work already done for us?

See trace output in the "vect" details dump for DR structures:

...
	base_object: *global_Input.0_145 + (sizetype) ((unsigned int) iy_179 * 2064)
	Access function 0: {4144B, +, 4}_2
...
> 
> The above comments are based on x86; there is one more problem on
> AArch64.  Given an address iv_use of the form MEM[base + IV + cst_offset],
> when cst_offset is outside the range of the [base + offset] addressing
> mode, IVOPTs rewrites it into the form below:
> 
>     temp_1 = base + IV;
>     temp_2 = temp_1 + cst_offset;
>     MEM[temp_2];
> 
> What we want is:
> 
>     temp_1 = base + cst_offset;
>     MEM[temp_1 + IV];
> 
> Thus temp_1 can be hoisted out of the loop when register pressure
> allows (as in this problem).  In cases where register pressure is
> high, the addition will very likely be kept in the loop, which is
> still better than the current situation, where register pressure is
> high because different IVs are chosen.  Without this change, the
> vectorization change causes an obvious regression in the loop body on
> AArch64.
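
(Spelling out where the two rewrites leave the statements relative to
the loop, in the same informal notation: with the current rewrite both
additions repeat on every iteration, whereas with the desired rewrite
the constant part is loop-invariant and can sit in the preheader when
registers allow.)

    // current rewrite: both additions stay inside the loop body
    loop:
      temp_1 = base + IV;
      temp_2 = temp_1 + cst_offset;
      MEM[temp_2];

    // desired rewrite: the constant part no longer depends on IV
    temp_1 = base + cst_offset;      // preheader, hoisted when possible
    loop:
      MEM[temp_1 + IV];
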
> 
> But both changes look like stage 1 work.
Right -- Thanks!

Doug
> 
> Thanks,
> bin
> 
>> Doug
>>
>> * Note that when IV uses are grouped, only positive constant-offset
>> constraints are considered.  The fact that negative offsets can be
>> used is reflected in the cost of using a different IV than the one
>> associated with a particular group.  Thus, once the optimal IV set is
>> found, a different IV may be chosen, which causes negative constant
>> offsets to be used.
>>

