This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH GCC][11/13]Annotate partition by its parallelism execution type


On Tue, Jun 20, 2017 at 12:34 PM, Richard Biener
<richard.guenther@gmail.com> wrote:
> On Tue, Jun 20, 2017 at 11:18 AM, Bin.Cheng <amker.cheng@gmail.com> wrote:
>> On Fri, Jun 16, 2017 at 11:10 AM, Richard Biener
>> <richard.guenther@gmail.com> wrote:
>>> On Mon, Jun 12, 2017 at 7:03 PM, Bin Cheng <Bin.Cheng@arm.com> wrote:
>>>> Hi,
>>>> This patch checks and records if partition can be executed in parallel by
>>>> looking if there exists data dependence cycles.  The information is needed
>>>> for distribution because the idea is to distribute parallel type partitions
>>>> away from sequential ones.  I believe current distribution doesn't work
>>>> very well because it does blind distribution/fusion.
>>>> Bootstrap and test on x86_64 and AArch64.  Is it OK?
>>>
>>> +  /* In case of no data dependence.  */
>>> +  if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
>>> +    return false;
>>> +  /* Or the data dependence can be resolved by compilation time alias
>>> +     check.  */
>>> +  else if (!alias_sets_conflict_p (get_alias_set (DR_REF (dr1)),
>>> +                                  get_alias_set (DR_REF (dr2))))
>>> +    return false;
>>>
>>> dependence analysis should use TBAA already, in which cases do you need this?
>>> It seems to fall foul of the easy mistake of not honoring GCCs memory model
>>> as well ... see dr_may_alias_p.
>> I see.  Patch updated with this branch removed.
>>
>>>
>>> +  /* Further check if any data dependence prevents us from executing the
>>> +     partition parallelly.  */
>>> +  EXECUTE_IF_SET_IN_BITMAP (partition->reads, 0, i, bi)
>>> +    {
>>> +      dr1 = (*datarefs_vec)[i];
>>> +      EXECUTE_IF_SET_IN_BITMAP (partition->writes, 0, j, bj)
>>> +       {
>>>
>>> what about write-write dependences?
>>>
>>> +  EXECUTE_IF_SET_IN_BITMAP (partition->reads, 0, i, bi)
>>> +    {
>>> +      dr1 = (*datarefs_vec)[i];
>>> +      EXECUTE_IF_SET_IN_BITMAP (partition->writes, i + 1, j, bj)
>>> +       {
>>> +         dr2 = (*datarefs_vec)[j];
>>> +         /* Partition can only be executed sequentially if there is any
>>> +            data dependence cycle.  */
>>>
>>> exact copy of the loop nest follows?!  Maybe you meant to iterate
>>> over writes in the first loop.
>> Yes, this is a copy-paste typo.  Patch is also simplified because
>> read/write are recorded together now.  Is it OK?
>
> Ok.
Sorry, I have to update this patch because of a mistake of mine: I didn't
update the partition type when fusing partitions.  For some partition
fusions the update is necessary, otherwise we end up with an inaccurate
type and inaccurate fusion later.  Is it OK?
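The missing update amounts to a simple rule: a fused partition is parallel only if both inputs were parallel.  Here is a minimal standalone sketch of that rule; the names (partition_merge_type, the two-field struct) are hypothetical simplifications, not GCC's actual partition_merge_into interface:

```c
#include <assert.h>

/* Simplified mirror of the patch's idea: a partition is
   PTYPE_SEQUENTIAL if a data dependence cycle forces in-order
   execution, PTYPE_PARALLEL otherwise.  */
enum partition_type { PTYPE_PARALLEL, PTYPE_SEQUENTIAL };

struct partition { enum partition_type type; };

/* When fusing SOURCE into DEST, the result stays parallel only if
   both inputs were parallel; a single sequential member makes the
   whole fused partition sequential.  Forgetting this update is the
   bug being fixed here.  */
static void
partition_merge_type (struct partition *dest, const struct partition *source)
{
  if (source->type == PTYPE_SEQUENTIAL)
    dest->type = PTYPE_SEQUENTIAL;
}
```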

Thanks,
bin
2017-06-20  Bin Cheng  <bin.cheng@arm.com>

    * tree-loop-distribution.c (enum partition_type): New.
    (struct partition): New field type.
    (partition_merge_into): Add parameter.  Update partition type.
    (data_dep_in_cycle_p, update_type_for_merge): New functions.
    (build_rdg_partition_for_vertex): Compute partition type.
    (rdg_build_partitions): Dump partition type.
    (distribute_loop): Update calls to partition_merge_into.
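For readers following the thread, the classification the patch computes boils down to the following idea, shown here as a standalone sketch with made-up types.  In the real patch the loop iterates bitmaps of data references (EXECUTE_IF_SET_IN_BITMAP over partition->datarefs) and asks dependence analysis whether a pair forms a cycle; here that answer is pre-baked into a flag so the sketch is self-contained:

```c
#include <stdbool.h>
#include <stddef.h>

enum partition_type { PTYPE_PARALLEL, PTYPE_SEQUENTIAL };

/* Stand-in for a data reference.  IN_CYCLE marks a reference that
   participates in a data dependence cycle with another reference of
   the same partition; in GCC this comes from dependence analysis,
   not from a stored flag.  */
struct dataref { bool in_cycle; };

/* A partition can be executed in parallel only when no pair of its
   data references forms a data dependence cycle; otherwise it must
   run sequentially.  */
static enum partition_type
classify_partition (const struct dataref *refs, size_t n)
{
  for (size_t i = 0; i < n; i++)
    for (size_t j = i + 1; j < n; j++)
      if (refs[i].in_cycle && refs[j].in_cycle)
        return PTYPE_SEQUENTIAL;
  return PTYPE_PARALLEL;
}
```

The point of recording this type per partition is that distribution can then keep parallel partitions away from sequential ones instead of fusing blindly.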
