This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.



Re: Automatic parallelization in 4.3?


Hello,

> Zdenek Dvorak wrote:
> >> I found one post by Zdenek Dvorak regarding auto-parallelization, but
> >> it doesn't mention Fortran AFAICT.  See
> >> http://gcc.gnu.org/ml/gcc-patches/2006-09/msg01243.html
> > 
> > the auto-parallelization we are currently developing is language-independent
> > (it runs together with other loop optimizations quite late in the
> > compilation process), so I cannot do anything strictly Fortran-specific.
> 
> As it came up as a question, and I have wondered about it myself:
> 
> Your patch (link above) uses:
> 
> + @item -ftree-parallelize-loops=@var{n}
> + Automatically generate parallel code using the primitives from
> + libgomp. Create parallel code for @var{n} threads.
> 
> The question is: why does one need to specify the number of threads
> at compile time? With OpenMP, this is done at run time via
> OMP_NUM_THREADS.

Just to make things simpler initially; with a few changed lines, this would be
possible.
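
For illustration (the loop and the file name are mine, not from the patch), a
plain loop like the one below is the kind of code -ftree-parallelize-loops
targets; with the patch as posted the thread count is fixed at compile time,
e.g. "gfortran -O2 -ftree-parallelize-loops=4 saxpy.f90", whereas the
equivalent explicit OpenMP version would pick up OMP_NUM_THREADS at run time:

  ! No OpenMP directives needed; the compiler generates the libgomp
  ! calls itself when -ftree-parallelize-loops=<n> is given.
  subroutine saxpy(n, a, x, y)
    integer, intent(in) :: n
    real, intent(in)    :: a, x(n)
    real, intent(inout) :: y(n)
    integer :: i

    do i = 1, n
       y(i) = y(i) + a * x(i)
    end do
  end subroutine saxpy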

> > It should be possible to pass some information from the Fortran frontend
> > (like the fact that the procedures referenced in FORALL constructs are pure)
> > to the optimizers, which might make it possible to parallelize the
> > corresponding loops (and it could also help other optimizations).
> 
> Do you know how to pass this information?

The simplest way would be to introduce and set a flag on the call_expr, or
on the associated callgraph edge.
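
As a concrete (made-up) illustration of the Fortran side: every procedure
referenced inside a FORALL must be PURE, so the iterations are free of side
effects and independent of each other; that is exactly the guarantee worth
forwarding to the loop optimizers. The module and routine names below are
invented for the example:

  module scale_mod
  contains
    pure function scaled(x) result(r)
      real, intent(in) :: x
      real :: r
      r = 2.0 * x
    end function scaled
  end module scale_mod

  subroutine apply_scaled(n, a, b)
    use scale_mod
    implicit none
    integer, intent(in) :: n
    real, intent(in)    :: a(n)
    real, intent(out)   :: b(n)
    integer :: i

    ! PURE calls only: no side effects, so the assignments for different
    ! values of i could be done in parallel.
    forall (i = 1:n)
       b(i) = scaled(a(i))
    end forall
  end subroutine apply_scaled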

Zdenek

