This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



[PATCH] Use OMP_RETURN_NOWAIT in tree-parloops.c


Hi!

While looking at PR36106, I've noticed that tree-parloops.c never makes
the created #pragma omp for nowait.  When the parallelized loop has no
reductions, remove_exit_barrier will most likely optimize that barrier
out anyway, but if there are reductions it won't, as there is code
between OMP_FOR's OMP_RETURN and OMP_PARALLEL's OMP_RETURN.
IMNSHO there is no point in a barrier after the loop, though: as with
normal reductions, the reduction code just does a #pragma omp atomic
addition of a private variable to the shared variable.  autopar
testsuite coverage is very limited, but this patch certainly doesn't
break any of the autopar.exp tests (which are all compile time only),
nor libgomp.c/autopar-1.c from the PR36106 patch.

Ok for trunk?

2008-05-06  Jakub Jelinek  <jakub@redhat.com>

	* tree-parloops.c (create_parallel_loop): Set OMP_RETURN_NOWAIT
	on OMP_RETURN for OMP_FOR.

--- gcc/tree-parloops.c.jj	2008-04-25 21:49:50.000000000 +0200
+++ gcc/tree-parloops.c	2008-05-05 22:54:49.000000000 +0200
@@ -1,5 +1,5 @@
 /* Loop autoparallelization.
-   Copyright (C) 2006, 2007 Free Software Foundation, Inc.
+   Copyright (C) 2006, 2007, 2008 Free Software Foundation, Inc.
    Contributed by Sebastian Pop <pop@cri.ensmp.fr> and
    Zdenek Dvorak <dvorakz@suse.cz>.
 
@@ -1639,7 +1639,9 @@ create_parallel_loop (struct loop *loop,
 
   /* Emit OMP_RETURN for OMP_FOR.  */
   bsi = bsi_last (ex_bb);
-  bsi_insert_after (&bsi, make_node (OMP_RETURN), BSI_NEW_STMT);
+  t = make_node (OMP_RETURN);
+  OMP_RETURN_NOWAIT (t) = 1;
+  bsi_insert_after (&bsi, t, BSI_NEW_STMT);
 
   return paral_bb;
 }


	Jakub

