This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Autoparallelization


Zdenek Dvorak wrote on 10/04/06 11:22:

> in the IL, there is nothing really missing.  Only a massive rewrite 
> of omp-low.c would be necessary, to make it all work on SSA and 
> probably also preserve loop structures.
> 
I doubt that you will need a *massive* rewrite.  OTOH, Richard and I
overhauled it at least 3 or 4 times during development, so it's not as
bad as it seems.  Yes, there will be work to be done to support SSA, and
I am not sure how painful that will be.  That's the one thing I
specifically set aside to be dealt with when the autoparallelizer was
implemented.

> Why do you prefer this to just sharing the parts that are used by 
> both omp lowering and parallelization?
> 
When I designed the OMP IL, my main goal was to introduce a set of
language constructs that would allow us to express parallelism so that
we could use it for both explicit and implicit parallel implementations.

Having a parallel IL means that each operation has specific semantics:
it can be analyzed and transformed without getting entangled in
operations that may be spread over 2 or 3 libcalls.  So, when you see
OMP_PARALLEL/OMP_RETURN, you know that it is a parallel region.  Your
flowgraph will be different, there will be additional edges because
data and control flow behave differently in a parallel region, etc.
All of this is a lot more difficult to divine if the parallel region
has already been lowered into whatever library code your runtime
exposes.
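
To make that concrete, here is a rough sketch of the two forms for a
trivial loop.  The GIMPLE and runtime details below are hand-written
approximations, not actual compiler dumps:

  /* Source form: one explicit parallel loop.  */
  void
  scale (int n, double *a)
  {
    int i;
  #pragma omp parallel for shared (a, n)
    for (i = 0; i < n; i++)
      a[i] = a[i] * 2.0;
  }

  /* High-level OMP GIMPLE, roughly: the region is a single construct,
     bracketed by OMP_PARALLEL/OMP_RETURN, so its extent and its
     data-sharing clauses are directly visible to any pass walking the IL.

       OMP_PARALLEL (shared (a) shared (n))
         OMP_FOR (i = 0; i < n; i++)
           a[i] = a[i] * 2.0;
         OMP_RETURN
       OMP_RETURN

     After expansion, the body is outlined into a separate function and
     the region becomes ordinary calls into the runtime (for libgomp,
     things like GOMP_parallel_start/GOMP_parallel_end); a later pass
     would have to pattern-match those calls to rediscover the region
     boundaries and the sharing information.  */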

The other advantage of targeting IL is that you have a single handoff
point for low-level code generation.  If in the future we decide to
change the implementation of our runtime system, we only need to change
code in one place.

Additionally, one of my long-term goals has always been to apply
concurrent SSA analyses and transformations to parallel code.  Even when
generating OMP code from explicit OpenMP programs, we have found
opportunities where additional analysis could improve the code by
removing superfluous synchronization or by moving variables that were
initially marked shared into private memory.
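
For instance (a hand-written illustration, not the output of any
existing pass), a scratch variable that the programmer marked shared
and guarded with a critical section can often be privatized once the IL
makes the region's extent and clauses explicit, at which point the
synchronization becomes superfluous:

  void
  saxpy (int n, float a, const float *x, float *y)
  {
    float tmp;
    int i;
  #pragma omp parallel for shared (tmp)
    for (i = 0; i < n; i++)
      {
  #pragma omp critical
        {
          /* tmp never carries a value from one iteration to the next,
             so it can legally be made private to each thread; once it
             is private, the critical section protects nothing (each
             iteration writes a distinct y[i]) and can be dropped.  */
          tmp = a * x[i];
          y[i] = y[i] + tmp;
        }
      }
  }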

Given the analysis you need to do to parallelize sequential code, I
don't think this would be of great benefit.  But there may be
opportunities for further optimization.

OMP expansion is fairly quick, so I doubt you will see a significant
slowdown from generating OMP GIMPLE instead of the low-level library code.

