

5.7 Options That Control Optimization

Most Fortran users will want to use no optimization when developing and testing programs, and use -O or -O2 when compiling programs for late-cycle testing and for production use. However, note that certain diagnostics—such as for uninitialized variables—depend on the flow analysis done by -O, i.e. you must use -O or -O2 to get such diagnostics.
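
For example, one might request the flow-analysis-based warnings during development and reserve heavier optimization for the production build (the file name myprog.f is only a placeholder):

     g77 -O -Wuninitialized -c myprog.f
     g77 -O2 -o myprog myprog.f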

The following flags have particular applicability when compiling Fortran programs:

-malign-double
(Intel x86 architecture only.)

Noticeably improves performance of g77 programs making heavy use of REAL(KIND=2) (DOUBLE PRECISION) data on some systems. In particular, systems using Pentium, Pentium Pro, 586, and 686 implementations of the i386 architecture execute programs faster when REAL(KIND=2) (DOUBLE PRECISION) data are aligned on 64-bit boundaries in memory.

This option can, at least, make benchmark results more consistent across various system configurations, versions of the program, and data sets.

Note: The warning in the gcc documentation about this option does not apply, generally speaking, to Fortran code compiled by g77.

See Aligned Data, for more information on alignment issues.

Also note: The negative form of -malign-double is -mno-align-double, not -benign-double.


-ffloat-store
Might help a Fortran program that depends on exact IEEE conformance on some machines, but might slow down a program that doesn't.

This option is effective when the floating-point unit is set to work in IEEE 854 `extended precision'—as it typically is on x86 and m68k GNU systems—rather than IEEE 754 double precision. -ffloat-store tries to remove the extra precision by spilling data from floating-point registers into memory and this typically involves a big performance hit. However, it doesn't affect intermediate results, so that it is only partially effective. `Excess precision' is avoided in code like:

          a = b + c
          d = a * e
     

but not in code like:

                d = (b + c) * e
     

For another, potentially better, way of controlling the precision, see Floating-point precision.


-fforce-mem

-fforce-addr
Might improve optimization of loops.


-fno-inline
Don't compile statement functions inline. Might reduce the size of a program unit—which might be at the expense of some speed (though it should compile faster). Note that if you are not optimizing, no functions can be expanded inline. For reference, a statement function is sketched below.
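
A statement function is a one-line, function-like definition local to a program unit, as in this minimal sketch (the names are illustrative):

           SUBROUTINE COMBIN(X, Y, Z, R)
           REAL X, Y, Z, R, MEAN, A, B
     C     MEAN is a statement function.  With -fno-inline, g77 does
     C     not expand references to it inline.
           MEAN(A, B) = (A + B) / 2.0
           R = MEAN(MEAN(X, Y), Z)
           END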


-ffast-math
Might allow some programs designed not to be too dependent on IEEE behavior for floating-point to run faster, or die trying. Sets -funsafe-math-optimizations, -ffinite-math-only, and -fno-trapping-math.


-funsafe-math-optimizations
Allow optimizations that may give incorrect results for certain IEEE inputs.
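
One transformation of this kind that may be performed is replacing a division by a constant with a multiplication by its reciprocal, which can round differently:

     C     With -funsafe-math-optimizations, this division may be
     C     compiled as X * (1.0 / 3.0), whose result can differ in
     C     the last bits from that of X / 3.0.
           Y = X / 3.0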


-ffinite-math-only
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.

This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications.

The default is -fno-finite-math-only.


-fno-trapping-math
Allow the compiler to assume that floating-point arithmetic will not generate traps on any inputs. This is useful, for example, when running a program using IEEE "non-stop" floating-point arithmetic.


-fstrength-reduce
Might make some loops run faster.


-frerun-cse-after-loop

-fexpensive-optimizations

-fdelayed-branch

-fschedule-insns

-fschedule-insns2

-fcaller-saves
Might improve performance on some code.


-funroll-loops
Typically improves performance on code using iterative DO loops by unrolling them and is probably generally appropriate for Fortran, though it is not turned on at any optimization level. Note that outer loop unrolling isn't done specifically; decisions about whether to unroll a loop are made on the basis of its instruction count.

Also, no `loop discovery' [1] is done, so only loops written with DO benefit from loop optimizations, including—but not limited to—unrolling. Loops written with IF and GOTO are not currently recognized as such. This option unrolls only iterative DO loops, not DO WHILE loops.
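
As an illustration of the distinction, the first loop below is a candidate for unrolling, while the second, though it computes the same thing, is written with IF and GOTO and so is not treated as a loop (the names are illustrative):

     C     An iterative DO loop: recognized, and a candidate for
     C     unrolling under -funroll-loops.
           DO 10 I = 1, N
              S = S + A(I)
        10 CONTINUE
     C     The same computation written with IF and GOTO: not
     C     currently recognized as a loop by the optimizer.
           I = 1
        20 IF (I .GT. N) GO TO 30
              S = S + A(I)
              I = I + 1
           GO TO 20
        30 CONTINUE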


-funroll-all-loops
Probably improves performance on code using DO WHILE loops by unrolling them in addition to iterative DO loops. In the absence of DO WHILE, this option is equivalent to -funroll-loops but possibly slower.
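
For example, a loop like the following sketch is considered for unrolling under this option but not under -funroll-loops:

     C     A DO WHILE loop; considered for unrolling only under
     C     -funroll-all-loops.
           DO WHILE (X .GT. EPS)
              X = X / 2.0
           END DO
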
-fno-move-all-movables

-fno-reduce-all-givs

-fno-rerun-loop-opt
In general, the loop optimizations that these options turn off lead to faster code being generated by GNU Fortran; hence those optimizations are enabled by default when issuing the g77 command, and these -fno- forms exist to disable them.

-fmove-all-movables and -freduce-all-givs will enable loop optimization to move all loop-invariant index computations in nested loops over multi-rank array dummy arguments out of these loops.

-frerun-loop-opt will move out of loops the offset calculations that arise because Fortran arrays by default have a lower bound of 1.
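
As an illustration, both kinds of calculation arise in a nested loop over a multi-rank array dummy argument, as in this sketch (the names are illustrative):

           SUBROUTINE SCALEA(A, M, N, F)
           INTEGER M, N, I, J
           REAL A(M, N), F
     C     The part of the address of A(I, J) that depends only on J,
     C     and the offset due to the default lower bound of 1, are
     C     invariant in the inner loop and can be hoisted out of it.
           DO 20 J = 1, N
              DO 10 I = 1, M
                 A(I, J) = A(I, J) * F
        10    CONTINUE
        20 CONTINUE
           END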

These three options are intended to be removed someday, once loop optimization is sufficiently advanced to perform all those transformations without help from these options.

See Options That Control Optimization in the gcc manual, for more information on options to optimize the generated machine code.


Footnotes

[1] loop discovery refers to the process by which a compiler, or indeed any reader of a program, determines which portions of the program are more likely to be executed repeatedly as it is being run. Such discovery typically is done early when compiling using optimization techniques, so the “discovered” loops get more attention—and more run-time resources, such as registers—from the compiler. It is easy to “discover” loops that are constructed out of looping constructs in the language (such as Fortran's DO). For some programs, “discovering” loops constructed out of lower-level constructs (such as IF and GOTO) can lead to generation of more optimal code than otherwise.