This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.


Re: Coarray article for the upcoming GCC Summit.


Bill Long wrote:
> MOENE Toon wrote:
>> Is there some difficulty with coarrays here that I am overlooking?
>
> The Rice University group has had two problems in this area, though neither affects our (Cray's) implementation. As background, our general implementation of coarrays on our vector systems works like this: coarrays are placed in a separate "symmetric" heap that starts at the same base address on each image and contains only coarrays. Because of the restrictions on allocatable coarrays, it is always possible to store coarrays such that the base address for a particular coarray is the same on each image. This lets you compute the address of a remote coarray reference using only the local address information for the same coarray. For ordinary and allocatable coarrays this is pretty straightforward, and Rice seems to have no problem addressing static coarrays.
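To make that addressing scheme concrete: because a coarray sits at the same virtual address on every image, a remote reference needs no translation table at all. A rough C sketch of what a remote store could be lowered to under that assumption (comm_put and everything around it are placeholders invented for illustration, not Cray's or gfortran's actual runtime interface):

#include <stdio.h>
#include <stddef.h>

/* Placeholder for a one-sided put supplied by the interconnect library;
 * the name and signature are invented for this sketch.  Here it only
 * prints what a real transfer would do. */
static void comm_put(int image, void *remote_addr, const void *local_buf,
                     size_t nbytes)
{
    (void)local_buf;
    printf("put %zu bytes to image %d at %p\n", nbytes, image, remote_addr);
}

/* With a symmetric heap the coarray starts at the same virtual address on
 * every image, so a remote store like  a(i)[img] = x  can use the *local*
 * address of a(i) as the target address on image img. */
static void coarray_store_remote(double *a_local_elem, double x, int img)
{
    comm_put(img, a_local_elem, &x, sizeof x);
}

int main(void)
{
    double a[10] = {0};   /* stands in for this image's slice of a coarray */
    coarray_store_remote(&a[3], 2.5, 1);
    return 0;
}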


> The second problem seems to have multiple names, one of which is "pinning of memory" on the images. Even if you handle the symmetric heap in some special way, the targets of pointer components and the actual memory for allocatable components can be anywhere in the local memory of each node. Some hardware DMA protocols evidently require that remotely accessed memory be "registered" or "pinned" somehow so that the network hardware can access it. The Cray vector implementation gets around this issue by doing two things: 1) we disable demand paging on any node running a coarray (or UPC) image, and 2) we (effectively) pin/register all of the physical memory on the node by using large pages and remote address translation tables. This results in very good performance, but is more restrictive than a generic implementation. Considering that libraries like MPI need to get around this same issue, I assume gfortran will have some solution available. But I think it is important to be aware of it from the start, and to think about the best solution when doing the basic design work.
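For anyone unfamiliar with the pinning issue: memory that the network adapter reads or writes directly has to stay resident, with its physical mapping fixed, for the duration of the transfer. A rough illustration of the "pin the buffer" half of the problem using plain POSIX mlock(); real interconnect stacks additionally register the pages with the adapter through their own hardware-specific calls, which are not shown here:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t nbytes = 1u << 20;
    void *buf = malloc(nbytes);
    if (buf == NULL)
        return 1;

    /* Lock the pages into physical memory so they cannot be paged out
     * while remote hardware accesses them.  (May fail if the memlock
     * rlimit is too small.) */
    if (mlock(buf, nbytes) != 0) {
        perror("mlock");
        free(buf);
        return 1;
    }

    /* ... buffer could now be handed to the network hardware ... */

    munlock(buf, nbytes);
    free(buf);
    return 0;
}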

This could be tricky if we want something portable, performant, and robust (pick one, ha ha). Here's one article ranting about RDMA that got quite a lot of press a few years ago:


http://www.hpcwire.com/hpc/815242.html

and the response

http://www.hpcwire.com/hpc/885757.html

I think the only sensible solution here would be to use some appropriate abstraction layer like GASNet or ARMCI. Do these also solve the first problem you mention?
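On the first problem: as far as I understand, such layers typically give you a registered segment per node but not necessarily at the same base address everywhere, so the runtime would exchange the segment bases once at startup and translate a local address into the remote one by offset. A sketch of that translation (all names here are invented for illustration; GASNet's and ARMCI's real interfaces differ):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define NIMAGES 2

static uintptr_t seg_base[NIMAGES]; /* seg_base[i] = base of image i's segment */
static int my_image = 0;

/* Stub standing in for the abstraction layer's one-sided put. */
static void comm_put(int image, uintptr_t remote_addr, const void *buf,
                     size_t nbytes)
{
    (void)buf;
    printf("put %zu bytes to image %d at %#lx\n",
           nbytes, image, (unsigned long)remote_addr);
}

/* If segment bases differ between images, translate the local address of a
 * coarray object into the corresponding address on the target image: the
 * offset within the segment is the same everywhere. */
static void caf_put(int image, const void *local_addr, const void *buf,
                    size_t nbytes)
{
    uintptr_t offset = (uintptr_t)local_addr - seg_base[my_image];
    comm_put(image, seg_base[image] + offset, buf, nbytes);
}

int main(void)
{
    double segment[16] = {0};          /* pretend this is our local segment */
    seg_base[0] = (uintptr_t)segment;
    seg_base[1] = 0x10000000UL;        /* made-up base on the other image */

    double x = 2.5;
    caf_put(1, &segment[3], &x, sizeof x);
    return 0;
}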

Is Cray planning to help out with coarray gfortran on Portals? I suppose most gfortran contributors have experience with programming and using MPI applications, but MPP systems programming is somewhat outside our experience. So I think any help in this area would be very welcome.

--
Janne Blomqvist

