This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.



Re: Co-arrays?


I once posted a question on the PETSc mailing list about OpenMP:

Was OpenMP ever considered as an alternative to MPI when
designing/implementing PETSc?

... and got this response:

"That would have been an epic disaster.

"Fortunately, OpenMP did not exist in any form when PETSc was started.
MPI did not exist either, but given that Barry and Bill were
developing a new package based on a parallel abstraction that did not
exist, at least they had the good sense to use one that _would_ work.

"Even today, OpenMP is basically a toy that provides a "convenient"
way to utilize a very small number of processor cores. It is not
scalable (even to large shared-memory machines) and was not designed
well for nontrivial library development."

On Tue, Jan 29, 2013 at 5:03 PM, Tobias Burnus <burnus@net-b.de> wrote:
> John Chludzinski wrote:
>>
>> One more question on my inventory list:  When will gfortran support
>> co-arrays?
>
>
> I admit it is a bit boring, but gfortran has nearly complete support for
> coarrays using a single image, i.e. all coarray programs (written such
> that they work correctly with num_images() == 1) should run.
>
> For proper parallelization (num_images() > 1): I think most of the basic
> infrastructure is there; mainly the actual communication is missing.
> Probably around one person-month of work is required to get it working.
> (Plus some more time to refine it.) I don't know whether and how much
> progress there will be during 4.9 development. I think there is some
> chance that progress will be made, but I wouldn't hold my breath.
>
>
>> My contractor asked for other avenues using Fortran to improve
>> performance and, of course, this brought up co-arrays.
>
>
> If you are on a single shared-memory node, you could consider OpenMP;
> that's one of the simpler ways to retrofit parallelization onto existing
> code. If you are using a distributed-memory machine, MPI (Message Passing
> Interface) could currently be the better approach: unlike coarrays, it is
> very widely available and well tested. There are also a few other
> possibilities …
>
> Tobias
>
>
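
To make the single-image support described above concrete, here is a
minimal coarray sketch (my own illustration, not from the thread). Built
with gfortran's -fcoarray=single option, num_images() returns 1 and the
program runs as an ordinary serial program; the same source should work
unchanged once multi-image support lands.

  program coarray_demo
    implicit none
    integer :: counter[*]        ! scalar coarray: one copy per image
    integer :: me, n

    me = this_image()            ! image index, 1..num_images()
    n  = num_images()            ! 1 under -fcoarray=single

    counter = me                 ! each image writes its local copy
    sync all                     ! barrier across all images

    if (me == 1) print '(a,i0,a)', 'running with ', n, ' image(s)'
  end program coarray_demo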
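
For the OpenMP route on a single shared-memory node, retrofitting usually
means adding directives around existing loops rather than restructuring
the code. A hedged sketch, with a hypothetical subroutine and array just
to show the shape of the change:

  subroutine scale_array(a, n)
    implicit none
    integer, intent(in)    :: n
    real,    intent(inout) :: a(n)
    integer :: i

    ! iterations of the loop are split across the available threads
    !$omp parallel do
    do i = 1, n
       a(i) = 2.0 * a(i)
    end do
    !$omp end parallel do
  end subroutine scale_array

Compiled with gfortran -fopenmp this runs in parallel; without the flag
the directives are treated as comments and the code stays serial, which
is what makes this approach comparatively easy to retrofit incrementally.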

