Re: Escape the unnecessary re-optimization in automatic parallelization.


Quoting Li Feng <nemokingdom@gmail.com>:
So my question is,

1. Is it necessary/correct, if we want to skip the re-optimization of the
first few passes before tree-parloop.c for the function fun.loop_f0 and
continue with the optimization passes after it? In my opinion there must be
compile-time savings if we do this.

Note that the process of parallelization adds new code, and makes pre-existing
code sub-optimal - e.g. it transforms the loop into a normal form where there
is only one induction variable.
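
To make the normal-form point concrete, here is a rough C-level illustration
(hypothetical source code, not GCC's actual GIMPLE output): a loop that
originally advances two induction variables is rewritten so that everything is
derived from a single canonical counter, which makes the iteration space easy
to divide among threads but can leave the per-iteration code worse until later
passes clean it up.

/* Original loop: two induction variables, i and p, advance in lock-step.  */
void scale (float *a, float *b, int n)
{
  float *p = b;
  int i;
  for (i = 0; i < n; i++)
    a[i] = *p++ * 2.0f;
}

/* Conceptually, after canonicalization: a single canonical induction
   variable; the second access is recomputed from it each iteration,
   so chunks of the iteration space can be handed to different threads.  */
void scale_canonical (float *a, float *b, int n)
{
  int iv;
  for (iv = 0; iv < n; iv++)
    a[iv] = b[iv] * 2.0f;
}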


So, even if you have a homogeneous set of cores that you are targeting, it
makes sense in principle to re-start optimizations from the beginning.
Whether re-running each individual optimization pass makes sense will depend
on what exactly that pass does and how it relates to the parallelized loop and
the target.

However, if parallelization is done for different target architectures, then
re-running the optimizations becomes more important, since different parameters
and heuristics can come into play.

Moreover, the set of optimization passes that run before parallelization is
subject to change as GCC evolves.

Therefore, the most effective way to address the issue of running redundant
optimization passes in this context is probably to put it in the wider context
of the work to allow external plugins to influence the pass sequence that is
being applied, and to control this with machine learning.
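
For reference, the plugin mechanism already gives an external module a hook
into the pass manager. The following is only a minimal sketch against the
GCC 4.5-era C plugin API as documented in the internals manual; the pass name
"plugin_dummy", the do-nothing execute function, and the exact set of includes
are illustrative assumptions, and a real plugin would put its pass-selection
logic in the execute hook.

/* Hypothetical plugin sketch: splice a do-nothing GIMPLE pass into the
   pipeline right after the "parloops" pass.  Built as a shared object and
   loaded with -fplugin=; the exact include set depends on the GCC version.  */
#include "gcc-plugin.h"
#include "coretypes.h"
#include "tree.h"
#include "tree-pass.h"
#include "plugin-version.h"

int plugin_is_GPL_compatible;

/* Pass body: a real plugin would decide here which follow-up
   optimizations the outlined loop function still needs.  */
static unsigned int
dummy_execute (void)
{
  return 0;
}

static struct gimple_opt_pass pass_dummy =
{
  {
    GIMPLE_PASS,
    "plugin_dummy",               /* name (hypothetical) */
    NULL,                         /* gate */
    dummy_execute,                /* execute */
    NULL,                         /* sub */
    NULL,                         /* next */
    0,                            /* static_pass_number */
    TV_NONE,                      /* tv_id */
    0,                            /* properties_required */
    0,                            /* properties_provided */
    0,                            /* properties_destroyed */
    0,                            /* todo_flags_start */
    0                             /* todo_flags_finish */
  }
};

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
  struct register_pass_info pass_info;

  if (!plugin_default_version_check (version, &gcc_version))
    return 1;

  /* Insert the new pass after the first instance of "parloops".  */
  pass_info.pass = &pass_dummy.pass;
  pass_info.reference_pass_name = "parloops";
  pass_info.ref_pass_instance_number = 1;
  pass_info.pos_op = PASS_POS_INSERT_AFTER;

  register_callback (plugin_info->base_name, PLUGIN_PASS_MANAGER_SETUP,
                     NULL, &pass_info);
  return 0;
}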

