[PATCH/ fortran] PR35423 omp workshare - first patch

Jakub Jelinek jakub@redhat.com
Wed Jul 30 21:43:00 GMT 2008


Hi!

On Sat, Jun 28, 2008 at 12:52:34PM -0500, Vasilis Liaskovitis wrote:
> attached is a first patch implementing parallelization for some
> statements within OMP WORKSHARE constructs.

Sorry for the delay, your mail was accidentally caught by spam filtering,
so I only discovered it much later when looking into why some mails got lost.

First of all, do you have a Copyright assignment on file?  That's a
precondition for getting the code accepted.

The patch generally looks good, but there are a couple of issues:

1) I don't see why ws_data.prev_singleunit can't be a local variable in
   gfc_trans_omp_workshare; there is no need to make it a field in the
   global variable
2) ws_data.workshare_clauses is problematic; the clauses argument passed to
   gfc_trans_omp_workshare will only contain nowait or nothing, so you
   should just pass that info around as a bool in ws_data and let both
   OMP_FOR and OMP_SINGLE create their own clause tree chains (see the
   clause chain sketch after this list) - the chain shouldn't be shared
   anyway, and for OMP_SINGLE most of the OMP_FOR-ish clauses shouldn't
   appear.  BTW, there is no need to add a collapse(1) clause, that's the
   default if no collapse clause is present.
3) ws_data as a global variable is problematic with nesting.  Say
!$omp workshare
  a(:) = b(:)
  !$omp parallel
  c(:) = d(:)
  !$omp workshare
  e(:) = f(:)
  !$omp end workshare
  !$omp end parallel
  g(:) = h(:)
!$omp end workshare
   This could be solved e.g. by copying ws_data into a temporary variable,
   clearing it, and restoring it afterwards around gfc_trans_omp_directive
   for EXEC_OMP_PARALLEL* (see the ws_data save/restore sketch after this
   list).
4) EXEC_OMP_BARRIER must not appear in !$omp workshare, so you shouldn't
   handle it there
5) /* barriers and critical regions are executed by each thread */
   ws_data.curr_singleunit = false;
   for EXEC_OMP_CRITICAL is wrong: the critical region is a single unit
   of work and as such is executed by just one of the threads.  But you
   want to clear, or save/clear/restore, some ws_data fields around
   gfc_trans_omp_directive for EXEC_OMP_CRITICAL too - the whole critical
   region is a single unit of work, so no OMP_FOR should be created
   inside it (the same save/restore sketch below applies here).
6) you only handle EXEC_OMP_PARALLEL in gfc_trans_omp_workshare,
   but you should handle EXEC_OMP_PARALLEL_{DO,SECTIONS,WORKSHARE}
   the same way
7) please watch formatting
   - space between if and (
   - < 80 column lines
   - the {} for an if are indented 2 more spaces than the if, the body
     another 2
   - comments should be sentences, starting with a capital letter and
     ending with a period followed by 2 spaces, not just one
   - single statement then or else blocks of an if don't need {}s around
     them
8) now that trunk has been tuplified, you'll need to change the creation
   of OMP_FOR slightly (GIMPLE_MODIFY_STMT is gone; see the MODIFY_EXPR
   sketch after this list)
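
For 2), something along these lines is what I mean - a completely untested
sketch; the ws_data.nowait field and the stmt/omp_clauses names are just
made up for illustration:

  /* Record only whether the workshare had a nowait clause (hypothetical
     bool field), and build a fresh clause chain at every spot where a
     workshared OMP_FOR or an OMP_SINGLE is created, instead of reusing
     one shared ws_data.workshare_clauses tree.  */
  tree omp_clauses = NULL_TREE;
  if (ws_data.nowait)
    omp_clauses = build_omp_clause (OMP_CLAUSE_NOWAIT);
  OMP_FOR_CLAUSES (stmt) = omp_clauses;  /* Or OMP_SINGLE_CLAUSES (stmt).  */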
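
For 3) and 5), roughly the following in the loop over the statements in
gfc_trans_omp_workshare (again untested; the struct tag and the res/code
variables are placeholders for whatever your patch actually uses):

    case EXEC_OMP_PARALLEL:
    case EXEC_OMP_PARALLEL_DO:
    case EXEC_OMP_PARALLEL_SECTIONS:
    case EXEC_OMP_PARALLEL_WORKSHARE:
    case EXEC_OMP_CRITICAL:
      {
        /* Save the workshare state and clear it, so that nothing inside
           the nested construct is treated as a unit of work of the
           enclosing workshare (and no OMP_FOR is created inside a
           critical), then restore it afterwards.  */
        struct gfc_ws_data saved_ws_data = ws_data;
        memset (&ws_data, 0, sizeof (ws_data));
        res = gfc_trans_omp_directive (code);
        ws_data = saved_ws_data;
      }
      break;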
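
For 8), the change should just be the tree code used for the loop
initialization statement when building the OMP_FOR - e.g. (made-up
variable names, the surrounding code stays as it was):

  /* If the patch built the init statement as a GIMPLE_MODIFY_STMT, e.g.
       init = build2 (GIMPLE_MODIFY_STMT, void_type_node, dovar, from);
     it now needs a plain MODIFY_EXPR, since the front end emits GENERIC
     and the tuples are only created during gimplification.  */
  init = build2 (MODIFY_EXPR, void_type_node, dovar, from);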

> I have also done some work on the following 2 items, let me know if
> you are interested in a patch:
> - do dependence analysis between statements so that openmp barriers
> are placed only where needed to satisfy dependences. Currently there
> is a barrier at the end of every workshared OMP_FOR loop and at the
> end of every OMP_SINGLE block.

Sure, this is very much desirable, but should probably be handled after
your initial patch is committed.  Once the dependence analysis is in, it
would also be desirable to estimate how expensive each single unit of work
is, and to use that together with the dependence analysis to decide what
should be merged into one OMP_SINGLE vs. what should go into different
OMP_SINGLEs (or whether, say, an OMP_SECTIONS should be created instead,
scheduling the various single units of work as separate sections, as long
as there are no dependences between them).  E.g. EXEC_OMP_PARALLEL* can
always be predicted to be expensive and so should be parallelized
separately, while scalar assignments are cheap and can always be merged,
etc.

> - workshare array assignments handled with builtin memcpy, memset (see
> http://gcc.gnu.org/ml/gcc/2008-04/msg00232.html)

Depends on whether it will be profitable or not.

	Jakub


