This is the mail archive of the fortran@gcc.gnu.org mailing list for the GNU Fortran project.



Re: MPI_Abort


Ok, so I'll just submit the trivial patch.

Speaking of which... I think today I've proven that no patch is so trivial that you can't screw it up if you pay little enough attention :-) The patch I posted earlier was wrong. The attached one should be correct, but I'll wait for the test suite to run its course before submitting it.

Daniel.

On 07/13/2011 03:28 PM, Tobias Burnus wrote:
On 07/13/2011 03:01 PM, Daniel Carrera wrote:
For example, I think it might be nice to try to free the coarrays
before aborting.

Well, one could - but on the other hand, the operating system will clean up ;-)

The idea of a more graceful abort is to send all other images a message
with an "error stop" tag and wait a few seconds for them to close down.
If they do, write the error and call MPI_FINALIZE; if they don't
answer, pull the plug by calling MPI_ABORT.

In order to implement this scheme, one needs the message queue, which we
do not yet have.

Hence, for the moment, I would stick to a simple MPI_Abort.


If one wants to do a cleanup, one could free the memory (as in caf_finalize). That will miss the allocatable coarrays, but who cares - after all, it is an error abort. Besides, the state of the program during an error abort might not be the best.

Tobias


--
I'm not overweight, I'm undertall.
Index: libgfortran/caf/single.c
===================================================================
--- libgfortran/caf/single.c	(revision 176230)
+++ libgfortran/caf/single.c	(working copy)
@@ -28,6 +28,7 @@ see the files COPYING3 and COPYING.RUNTI
 #include <stdio.h>  /* For fputs and fprintf.  */
 #include <stdlib.h> /* For exit and malloc.  */
 #include <string.h> /* For memcpy and memset.  */
+#include <stdarg.h> /* For variadic arguments.  */
 
 /* Define GFC_CAF_CHECK to enable run-time checking.  */
 /* #define GFC_CAF_CHECK  1  */
@@ -40,6 +41,21 @@ see the files COPYING3 and COPYING.RUNTI
 caf_static_t *caf_static_list = NULL;
 
 
+/* Keep in sync with mpi.c.  */
+static void
+caf_runtime_error (int error, const char *message, ...)
+{
+  va_list ap;
+  fprintf (stderr, "Fortran runtime error: ");
+  va_start (ap, message);
+  vfprintf (stderr, message, ap);
+  va_end (ap);
+  fprintf (stderr, "\n");
+
+  /* FIXME: Shutdown the Fortran RTL to flush the buffer.  PR 43849.  */
+  exit (error);
+}
+
 void
 _gfortran_caf_init (int *argc __attribute__ ((unused)),
 		    char ***argv __attribute__ ((unused)),
@@ -73,12 +89,12 @@ _gfortran_caf_register (ptrdiff_t size, 
 
   if (unlikely (local == NULL || token == NULL))
     {
+      const char msg[] = "Failed to allocate coarray";
       if (stat)
 	{
 	  *stat = 1;
 	  if (errmsg_len > 0)
 	    {
-	      const char msg[] = "Failed to allocate coarray";
 	      int len = ((int) sizeof (msg) > errmsg_len) ? errmsg_len
 							  : (int) sizeof (msg);
 	      memcpy (errmsg, msg, len);
@@ -88,10 +104,7 @@ _gfortran_caf_register (ptrdiff_t size, 
 	  return NULL;
 	}
       else
-	{
-	  fprintf (stderr, "ERROR: Failed to allocate coarray");
-	  exit (1);
-	}
+	  caf_runtime_error (1, msg);
     }
 
   if (stat)
Index: libgfortran/caf/mpi.c
===================================================================
--- libgfortran/caf/mpi.c	(revision 176230)
+++ libgfortran/caf/mpi.c	(working copy)
@@ -47,6 +47,7 @@ static int caf_is_finalized;
 caf_static_t *caf_static_list = NULL;
 
 
+/* Keep in sync with single.c.  */
 static void
 caf_runtime_error (int error, const char *message, ...)
 {
@@ -62,7 +63,7 @@ caf_runtime_error (int error, const char
   MPI_Abort (MPI_COMM_WORLD, error);
 
   /* Should be unreachable, but to make sure also call exit.  */
-  exit (2);
+  exit (error);
 }
 
 
