[RFC, PR66873] Use graphite for parloops

Tom de Vries <Tom_deVries@mentor.com>
Thu Jul 16 11:41:00 GMT 2015


On 16/07/15 12:23, Richard Biener wrote:
> On Thu, Jul 16, 2015 at 12:19 PM, Thomas Schwinge
> <thomas@codesourcery.com> wrote:
>> Hi Tom!
>>
>> On Thu, 16 Jul 2015 10:46:00 +0200, Richard Biener <richard.guenther@gmail.com> wrote:
>>> On Wed, Jul 15, 2015 at 10:26 PM, Tom de Vries <Tom_deVries@mentor.com> wrote:
>>>> I tried to parallelize this Fortran test-case (based on autopar/outer-1.c),
>>>> [...]
>>
>>>> So I wondered: why not always use the graphite dependence analysis in
>>>> parloops?  (Of course you could use -floop-parallelize-all, but that also
>>>> changes the heuristic.)  So I wrote a patch for parloops to use graphite
>>>> dependence analysis by default (so without -floop-parallelize-all), but
>>>> while testing I found that all the reduction test-cases started failing,
>>>> because the modifications graphite makes to the code mess up the parloops
>>>> reduction analysis.
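
To illustrate, the reduction test-cases involve loops of roughly this
shape (a minimal sketch of mine, not one of the actual test-cases):

    /* parloops' reduction analysis detects the accumulation into 's',
       distributes the loop iterations across threads, and combines the
       per-thread partial sums afterwards.  */
    int
    sum (int *a, int n)
    {
      int s = 0;
      int i;

      for (i = 0; i < n; i++)
        s += a[i];

      return s;
    }
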
>>>>
>>>> Then I came up with this patch, which:
>>>> - first runs a parloops pass, restricted to reduction loops only,
>>>> - then runs the graphite dependence analysis, and
>>>> - finally runs a normal parloops pass.
>>>>
>>>> This way, we get to both:
>>>> - compile the reduction test-cases as before, and
>>>> - profit from the better graphite dependence analysis otherwise.
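
To make the proposed ordering concrete, in gcc/passes.def terms it
would look roughly like this (a sketch only: pass_parallelize_loops
and pass_graphite are the existing passes, but how the first instance
is restricted to reduction loops, and how graphite's results reach the
second instance, is not shown here):

    /* 1: parloops, handling reduction loops only, run before graphite
          rewrites the IL.  */
    NEXT_PASS (pass_parallelize_loops);
    /* 2: graphite dependence analysis, marking the loops it can prove
          parallel.  */
    NEXT_PASS (pass_graphite);
    /* 3: a normal parloops run, picking up the loops marked parallel
          by graphite.  */
    NEXT_PASS (pass_parallelize_loops);
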
>>
>>> graphite dependence analysis is too slow to be enabled unconditionally.
>>> (read: hours in some simple cases - see bugzilla)
>>
>> Haha, "cool"!  ;-)
>>
>> Maybe it is still reasonable to use graphite to analyze the code inside
>> OpenACC kernels regions -- maybe such code can reasonably be expected to
>> not have the properties that make its analysis lengthy?  So, Tom, could
>> you please identify and check such PRs, to get an understanding of what
>> these properties are?
>
> Like the ones in PR62113, PR53852, or PR59121.

PR62113 and PR59121 do not reproduce for me on trunk.

PR53852 does reproduce for me (to the point that I had to reset my laptop).

Thanks,
- Tom


