effect of -fPIC on code speed

Ian Lance Taylor iant@google.com
Fri Sep 17 05:16:00 GMT 2010

Miles Bader <miles@gnu.org> writes:

> I recently wanted to produce both a standalone executable and a shared
> library using the same set of object files, and so compiled them all
> using -fPIC (as "gcc -shared" demands it).  I made my shared lib, and
> also made a normal executable using these object files -- and I
> noticed that the resulting executable size was a fair bit smaller
> (according to the "size" command) than the previous non-fPIC
> executable I had compiled (using non-fPIC object files).
> I haven't done any significant amount of benchmarking on it, but it
> didn't seem obviously slower.
> [This is on an x86-64 system, using a g++-snapshot from debian:
> gcc (Debian 20100828-1) 4.6.0 20100828 (experimental) [trunk revision 163616] ]
> Is there any general wisdom about the effects of using -fPIC,
> especially on code speed?  Would it be stupid for me to set up my
> configure script to _always_ use -fPIC (even when not needed for a
> shared library), when it detects that gcc accepts that option?

I did various measurements a few years ago, and for me -fPIC code ran
some 2-3% slower than non -fPIC code.  It does, of course, depend
on your code.  With -fPIC all uses of global variables require an
additional memory load.  In a shared library, all function calls require
executing an additional instruction from a different cache line, plus an
additional memory load; when you link -fPIC code into an executable,
that particular slowdown can often be eliminated at link time.

The -fPIC option inhibits code inlining, which is probably why you are
seeing smaller object files.  If you want smaller object files, you will
normally get better results from -Os than from -fPIC.
