This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.
Re: CAF Implementation
- From: "N.M. Maclaren" <nmm1 at cam dot ac dot uk>
- To: "Rouson, Damian" <rouson at sandia dot gov>, "fortran at gcc dot gnu dot org" <fortran at gcc dot gnu dot org>
- Date: 27 Mar 2010 14:59:13 +0000
- Subject: Re: CAF Implementation
- References: <4BAD3D07.2050106@net-b.de> <C7D363E5.F38C%rouson@sandia.gov>
On Mar 27 2010, Rouson, Damian wrote:
I recently saw a talk by Bob Numrich, who, as I'm sure many of you know,
invented coarrays when he was at Cray in the 1990s. He showed performance
advantages of coarrays over MPI in strong scaling because of the lower
communication overhead associated with coarrays. In discussions after the
talk, I commented that this implied one would not want to use MPI under the
hood to support a coarray syntax. He concurred.
That is a commonly repeated assertion. It was said about co-array Fortran,
before that about OpenMP and HPF, and before that about Cray SHMEM. Plus
others. In every single case, the evidence from the field has shown that
it is so misleading as to be effectively false. MPI did not become dominant
by being inefficient.
The proviso that you have missed is "special hardware or operating system
support". Without that, as on clusters connected using TCP/IP as a
transport interface, there is no reduction in communication overhead
whatsoever. Throughout its lifetime, new Cray has provided such hardware
and operating system support, and even old Cray did on the relevant machines.
For many decades, Cray has had a bee in its bonnet about one-sided
transfers being more efficient, but almost all non-Cray experience over
that time is that they are no more efficient on a system not specially
designed for one-sided transfers, and they are MUCH harder to use.
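For readers less familiar with the distinction, here is a minimal sketch of what a one-sided transfer looks like in Fortran 2008 coarray syntax (variable names are hypothetical, chosen only for illustration): one image writes directly into another image's memory, with no matching receive call on the target side.

```fortran
! Hypothetical sketch of a one-sided "put" using coarrays.
! Image 1 writes into image 2's copy of buf; image 2 issues no
! receive call, only a synchronization.
program caf_put_sketch
  implicit none
  real :: buf[*]        ! coarray: one instance of buf per image
  buf = 0.0
  sync all              ! make sure every image has initialized buf
  if (this_image() == 1) then
     buf[2] = 42.0      ! one-sided put into image 2's memory
  end if
  sync all              ! image 2 must not read buf before the put lands
  if (this_image() == 2) print *, 'buf on image 2 =', buf
end program caf_put_sketch
```

The two-sided MPI equivalent would pair an explicit MPI_Send on one rank with a matching MPI_Recv on the other; the receive itself acts as the synchronization point. With the one-sided style, that synchronization must instead be supplied separately (the sync all statements above), which is part of why such code is harder to get right without hardware support for remote memory access.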
You might also like to consider what alternatives to MPI there are for
communication on distributed memory systems. I have mentioned TCP/IP, and
Tobias mentioned GASNet. But what does GASNet use? Guess :-) To a good
first approximation, MPI is by far the most efficient portable transport
specification for distributed memory systems, followed by TCP/IP. But
there really isn't much else, so it isn't surprising :-(
Regards,
Nick Maclaren.