This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/34265] Missed optimizations
- From: "dominiq at lps dot ens dot fr" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 28 Nov 2007 23:57:17 -0000
- Subject: [Bug tree-optimization/34265] Missed optimizations
- References: <bug-34265-12313@http.gcc.gnu.org/bugzilla/>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Comment #15 from dominiq at lps dot ens dot fr 2007-11-28 23:57 -------
If I am allowed to be sarcastic too, I will say that the increase in compile time
(worst case 11%, arithmetic average 5%) is not against the current trend one
can see, for instance, in
http://www.suse.de/~gcctest/c++bench/polyhedron/polyhedron-summary.txt-1-0.html
for no gain at all in execution time (see also the thread
http://gcc.gnu.org/ml/fortran/2007-07/msg00276.html).
Now I do expect that there will never be a patch committed worse than
Richard's!
It came very fast: about one hour after my post.
It has not broken anything so far.
It did the optimizations it was supposed to do on the intended code and some
variants, even though it broke vectorization for some other variants and
increased the execution time of kepler by 15%.
At least it confirmed that the bottleneck for induct was both loop
unrolling and vectorization. Indeed, it remains to be understood why
vectorization is no longer applied to codes to which it was applied before the
patch.
To be clear, I think it is a mistake to use the f90 array features on small
vectors, but I have seen it more often than I'd like. So this is the kind of
optimization that can find its place in real-life codes, and not only in
benchmarks.
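
For context, the pattern in question looks something like the following minimal
sketch (a hypothetical example, not taken from the bug report or the testcases
attached to it): Fortran 90 whole-array syntax applied to a small fixed-size
vector, which an optimizer would ideally scalarize into straight-line code
rather than emit (and perhaps fail to vectorize) a loop.

```fortran
! Hypothetical illustration of f90 array features on a small vector.
! The whole-array expression below operates on length-3 arrays; a good
! optimizer would unroll/scalarize it into three scalar operations.
program small_vec
  implicit none
  real :: a(3), b(3), c(3)
  a = (/ 1.0, 2.0, 3.0 /)
  b = (/ 4.0, 5.0, 6.0 /)
  c = a + 2.0*b          ! whole-array expression on a length-3 vector
  print *, c
end program small_vec
```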
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34265