Created attachment 27792 [details]
Convolution example C file, pre-processed version, build log, assembler output
The classic convolution algorithm (as implemented in GraphicsMagick) is observed to run 2X slower with -mfpmath=sse than with -mfpmath=387. Unfortunately, -mfpmath=sse is the default for -m64 builds on x86-64, so this has a large impact on users.
Even with -mfpmath=387, other compilers (LLVM, Open64, and Oracle Studio) produce faster code by default, so some of them achieve up to 3X better overall run-time performance, and all of them are at least 2X faster than the GCC default for x86-64.
This issue has been verified under Solaris 10, OpenIndiana, and Ubuntu Linux on Opteron and several modern Xeon CPUs.
Please note that AMD Opteron 6200 family CPUs were not observed to suffer from this issue.
Created attachment 27793 [details]
Created attachment 27794 [details]
Sample portable source file
Created attachment 27795 [details]
Created attachment 27796 [details]
Generated assembler code
Please note that while I mentioned GCC 4.6.2, the same problem is also observed with GCC 4.7.1.
Created attachment 27797 [details]
Pre-processed GraphicsMagick source (effect.c).
In case the small sample (which only illustrates the core algorithm) is not sufficient, I have attached a pre-processed version of the real GraphicsMagick code exhibiting the performance issue. Look for ConvolveImage().
What options do you use besides -march=corei7-avx? The build log does not say.
Did you try -march=corei7 instead of -march=corei7-avx?
I used -march=native in this case. It is interesting that this enabled AVX (this particular CPU does support it).
To be clear, the problem also occurs with
-m64 -mtune=generic -march=x86-64 -mfpmath=sse
-m64 -mtune=generic -march=x86-64 -mfpmath=387
and is also observed on a 5-year old Opteron.
With GCC 4.7.1, for a specific application benchmark with generic architecture and tuning, -mfpmath=387 produces 0.133 iter/s while -mfpmath=sse produces 0.047 iter/s. A different (non-GCC) compiler on the same system produces 0.155 iter/s.
In the course of testing, I have indeed tried -march=corei7 and it did not provide an improvement.
(In reply to comment #8)
> I used -march=native in this case. It is interesting that this enabled AVX
> (this particular CPU does support it).
> To be clear, the problem also occurs with
> -m64 -mtune=generic -march=x86-64 -mfpmath=sse
> -m64 -mtune=generic -march=x86-64 -mfpmath=387
> and is also observed on a 5-year old Opteron.
> With GCC 4.7.1, and for a specific application benchmark case and with generic
> architecture and tuning, -mfpmath=387 produces 0.133 iter/s and -mfpmath=sse
> produces 0.047 iter/s. A different (non-GCC) compiler on the same system
> produces 0.155 iter/s.
> In the course of testing, I have indeed tried -march=corei7 and it did not
> provide an improvement.
What kind of optimization options are you using? -O3? Or are you really
using -O0 (aka nothing)?
This particular application test was done with these options (i.e. -O2):
-m64 -mtune=generic -march=x86-64 -mfpmath=387 -O2
I have also tried -O3, with no positive benefit.
The Autoconf default is -O2 so that is what I generally test/tune the software with. It is pretty rare to see additional benefit from -O3, although with some versions of GCC I have seen application crashes due to wrong code from the tree vectorizer.
I just verified that -O3 produces similar timings to -O2 for both -mfpmath=387 and -mfpmath=sse.
I tried it at "-O2" and got low performance with -mfpmath=sse. It looks like it is caused by a register dependency on %xmm0 between:
addss %xmm0, %xmm1
cvtsi2ss %eax, %xmm0
(cvtsi2ss writes only the low 32 bits of its destination register, so it carries a false dependency on the previous value of %xmm0.) Renaming %xmm0 in the cvtsi2ss to another free register, e.g.
cvtsi2ss %eax, %xmm2
in all such cases resolves the issue.
Also you can try "-O2 -funroll-loops", which made the "sse" code even faster, and "-O2 -fschedule-insns", which significantly reduced the performance loss in the "sse" case.
You can also try -frename-registers
-m64 -mtune=generic -march=x86-64 -mfpmath=sse -O2 -funroll-loops -fschedule-insns
I see a whole-program performance jump from 0.047 iter/s to 0.156 iter/s (a 3.3X speedup). That is huge! Given the fundamental properties of this algorithm (the image-processing algorithm most often recommended for offload to a GPU), the world would be a better place if this performance were the normal case.
-m64 -mtune=generic -march=x86-64 -mfpmath=sse -O2 -fschedule-insns
I see 0.101 iter/s
These options must not be included in -O3, since
-m64 -mtune=generic -march=x86-64 -mfpmath=sse -O3
produces only 0.048 iter/s
Testing shows that using
-m64 -march=native -O2 -mfpmath=sse -frename-registers
is sufficient to restore good performance.
Is there a way that I can selectively apply the -frename-registers fix to functions which benefit from it in order to work around the bug until the fix is widely available? I tried
#pragma GCC optimize ("O3,rename-registers")
#pragma GCC optimize ("rename-registers")
as well as the function-attribute equivalent, and there was no effect. GCC seems to ignore the request.
I did find another somewhat similar function which benefited significantly from -frename-registers.
I discovered that GCC's __attribute__((__optimize__())) and optimization pragmas do not work for OpenMP code, because GCC outlines the body of a parallel region into a separate compiler-generated function, and the per-function options are not inherited by that outlined function. This makes it much more painful to work around this bug.
Is this bug related to PR19780?