This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.



Re: Coarray article for the upcoming GCC Summit.


On Sat, Apr 19, 2008 at 02:57:26PM +0200, Toon Moene wrote:
>Attached you'll find the gzip'd draft of my paper on coarray Fortran in GCC
>for the upcoming GCC Summit (see http://www.gccsummit.org).
>
>It has become a somewhat difficult balance between precision (on the exact
>places and code that have to change in the Fortran front end and run time
>library) and sufficient abstraction to keep the paper within length and
>readable.
>
>I would appreciate everyone's comments and suggestions for improvement.

Just minor spelling suggestions.

1. "They will do it by run-
ning multiple jobs in parallel, faster."
For improved readability, I'd say
.. in parallel, which is usually faster.

1. "We also use the concept of direct addressibility;"
s/addressibility/addressability/g

5.2.2. "It has to be augmented
by code to determine which part of an coarray is local to
the image so that it an be allocated."

s/an coarray/a coarray/
s/it an be/it can be/;# missing 'c'

5.2.3. "but it must capture new, illegal,
forms of initialization:"

Surplus comma after "illegal", I think.

8. "The author is also indebted to the others on the GNU
Fortran Team, for providing Fortran using professionals"

I do not understand what "using professionals" means here.
Do you mean "Fortran-using", perhaps?

PS: Using MPI as the underlying mechanism sounds like a good approach
to me, fwiw. I don't know how the aforementioned GASNet performance data
was produced, but nowadays I'd rather expect something like
   Lat   :   1k      2k      4k      8k     16k     32k     64k    128k
  3.20us  151.019 248.264 366.699 480.153 539.654 693.366 809.178 902.439
  3.19us  151.072 247.335 366.699 477.683 539.823 693.191 809.202 902.349
  3.20us  151.231 249.490 366.387 480.489 539.103 693.191 809.083 902.394
  3.20us  150.859 247.977 366.699 478.815 539.484 693.366 809.178 902.439
  3.19us  151.285 247.335 366.699 479.684 538.765 693.086 809.202 902.349
  3.20us  151.285 247.335 365.996 480.421 539.654 692.946 809.202 902.439
  3.20us  150.806 247.620 365.063 479.349 539.272 693.366 808.988 902.349
for application-level performance via MPI (InfiniBand numbers).
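
(For context, a hypothetical sketch of my own, not taken from the paper or
from GASNet: numbers like the ones above are usually obtained with a simple
MPI ping-pong benchmark, roughly along these lines. Buffer size, iteration
count and the MB/s scaling are illustrative choices.)

/* Minimal MPI ping-pong sketch: rank 0 and rank 1 bounce a message
 * back and forth; latency is half the average round-trip time and
 * bandwidth is message size divided by the one-way time.
 * Typical build/run: mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs, iters = 1000;
    int size = 128 * 1024;               /* message size in bytes, e.g. 128k */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) {
        if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }
    buf = malloc(size);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double one_way = (t1 - t0) / iters / 2.0;    /* seconds per one-way trip */
        printf("size %d bytes: latency %.2f us, bandwidth %.3f MB/s\n",
               size, one_way * 1e6, size / one_way / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}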

