This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Refining autopar cost model for outer loops


Hi,

This patch changes the minimum number of iterations an outer loop must have 
to pass the runtime check which tests whether it is worthwhile to 
parallelize the loop.
The current minimum for all loops is MIN_PER_THREAD * number of threads, 
where MIN_PER_THREAD is arbitrarily set to 100.
This prevents some promising loops in SPEC2006 from getting parallelized.
I relaxed the minimum bound for outer loops, under the assumption that 
even if an outer loop itself has few iterations, the inner loops it 
contains provide enough work to make parallelization worthwhile.
This indeed allowed many more loops to get parallelized, resulting in 
substantial performance improvements for SPEC2006 benchmarks, measured on 
a Power7 with 6 cores, 4-way SMT each.
I compared the trunk with O3 + autopar (parallelizing with 6 threads) vs. 
the trunk with O3 minus vectorization.
None of the benchmarks shows any significant degradation.

The speedup shown for libquantum with autopar was already obtained with 
previous versions of autopar; it has no relation to this patch, but it is 
not degraded by it either.

These are the speedups I collected:

462.libquantum  2.5 X
410.bwaves      3.3 X
436.cactusADM   4.5 X
459.GemsFDTD    1.27 X
481.wrf         1.25 X


Bootstrap and testsuite (with -ftree-parallelize-loops=4) pass 
successfully.
SPEC2006 showed no regressions.


OK for trunk?
Thanks,
razya

2012-05-08  Razya Ladelsky  <razya@il.ibm.com>

	* tree-parloops.c (gen_parallel_loop): Change many_iterations_cond
	for outer loops.

Attachment: refine_cost_model.txt
Description: Text document

