This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [RFC, PR66873] Use graphite for parloops


Hi Tom!

On Thu, 16 Jul 2015 10:46:00 +0200, Richard Biener <richard.guenther@gmail.com> wrote:
> On Wed, Jul 15, 2015 at 10:26 PM, Tom de Vries <Tom_deVries@mentor.com> wrote:
> > I tried to parallelize this Fortran test-case (based on autopar/outer-1.c),
> > [...]

> > So I wondered: why not always use the graphite dependency analysis in
> > parloops?  (Of course you could use -floop-parallelize-all, but that also
> > changes the heuristics.)  So I wrote a patch for parloops to use graphite
> > dependency analysis by default (so without -floop-parallelize-all), but
> > while testing I found that all the reduction test-cases started failing,
> > because the modifications graphite makes to the code mess up the parloops
> > reduction analysis.
> >
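For context: a minimal reduction loop -- my own illustration here, not one of
Tom's test-cases -- looks something like this:

    /* Scalar reduction: the sum is accumulated into "s" across
       iterations, so parloops' reduction analysis has to recognize
       this pattern before it can parallelize the loop.  */
    double
    sum (const double *a, int n)
    {
      double s = 0.0;
      for (int i = 0; i < n; i++)
        s += a[i];
      return s;
    }

It is this recognition that, per the above, gets confused once graphite has
rewritten the loop.
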
> > Then I came up with this patch, which:
> > - first runs a parloops pass, restricted to reduction loops only,
> > - then runs graphite dependency analysis,
> > - followed by a normal parloops pass run.
> >
> > This way, we get to both:
> > - compile the reduction testcases as before, and
> > - profit from the better graphite dependency analysis otherwise.
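
If I understand the ordering correctly, in passes.def terms it would look
roughly like this -- my sketch of the idea only, not Tom's actual patch, and
how the first parloops instance gets restricted to reduction loops (a pass
parameter, a flag, ...) is left open here:

    /* Sketch only -- not the actual patch.  */
    NEXT_PASS (pass_parallelize_loops);   /* restricted: reduction loops only */
    NEXT_PASS (pass_graphite);            /* graphite dependence analysis */
    NEXT_PASS (pass_parallelize_loops);   /* normal parloops run */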

> graphite dependence analysis is too slow to be enabled unconditionally.
> (read: hours in some simple cases - see bugzilla)

Haha, "cool"!  ;-)

Maybe it is still reasonable to use graphite to analyze the code inside
OpenACC kernels regions: such code can perhaps be expected not to have the
properties that make graphite's analysis so slow.  So, Tom, could you please
identify and check those PRs, to get an understanding of what these
properties are?


Grüße,
 Thomas


