This is the mail archive of the mailing list for the GNU Fortran project.


Re: Automatic parallelization in 4.3?


> Zdenek Dvorak wrote:
> >> I found one post by Zdenek Dvorak regarding auto-parallelization, but
> >> it doesn't mention Fortran AFAICT.  See
> >>
> > 
> > the autoparallelization we currently develop is language-independent
> > (it runs together with other loop optimizations quite late in the
> > compilation process), so I cannot do anything strictly Fortran-specific.
> As this came up as a question, and I wonder about it myself:
> Your patch (link above) uses:
> + @item -ftree-parallelize-loops=@var{n}
> + Automatically generate parallel code using the primitives from
> + libgomp. Create parallel code for @var{n} threads.
> The question is: why does one need to specify the number of processors
> at compile time? With OpenMP this is done at run time via the
> OMP_NUM_THREADS environment variable.

Just to make things simpler initially; with a few changed lines, the number
of threads could be determined at run time instead.

> > It should be possible to pass some information from fortran frontend
> > (like the fact that the procedures in forall constructs are pure) to the
> > optimizers, which might make it possible to parallelize the
> > corresponding loops (and it could also help other optimizations).
> Do you know how to pass this information?

The simplest way would be to set a flag on the call_expr, or on the
associated callgraph edge.

