Most Fortran users will want to use no optimization when developing and testing programs, and use -O or -O2 when compiling programs for late-cycle testing and for production use. However, note that certain diagnostics—such as for uninitialized variables—depend on the flow analysis done by -O, i.e. you must use -O or -O2 to get such diagnostics.
The following flags have particular applicability when compiling Fortran programs:
Noticeably improves performance of g77 programs making heavy use of DOUBLE PRECISION data on some systems. In particular, systems using Pentium, Pentium Pro, 586, and 686 implementations of the i386 architecture execute programs faster when DOUBLE PRECISION data are aligned on 64-bit boundaries in memory.
This option can, at least, make benchmark results more consistent across various system configurations, versions of the program, and data sets.
Note: The warning in the gcc documentation about this option generally does not apply to Fortran code compiled by g77.
See Aligned Data, for more information on alignment issues.
Also note: The negative form of -malign-double is -mno-align-double, not -benign-double.
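As an illustrative sketch (the program and common-block names here are invented), misalignment typically arises when a DOUBLE PRECISION item follows a single-word item, as in the COMMON block below; note that -malign-double affects only data the compiler is free to place, such as local variables, since COMMON layout is fixed by storage association:

```fortran
c     Hypothetical example.  The INTEGER I occupies one 32-bit word,
c     so the DOUBLE PRECISION D that follows it in the COMMON block
c     starts on a 32-bit boundary rather than a 64-bit one.
      PROGRAM ALIGN
      INTEGER I
      DOUBLE PRECISION D
      COMMON /BLK/ I, D
      I = 1
      D = 2.0D0
      PRINT *, I, D
      END
```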
This option is effective when the floating-point unit is set to work in IEEE 854 `extended precision', as it typically is on x86 and m68k GNU systems, rather than IEEE 754 double precision. -ffloat-store tries to remove the extra precision by spilling data from floating-point registers into memory, and this typically involves a big performance hit. However, it doesn't affect intermediate results, so it is only partially effective. `Excess precision' is avoided in code like:
     a = b + c
     d = a * e
but not in code like:
d = (b + c) * e
For another, potentially better, way of controlling the precision, see Floating-point precision.
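A minimal sketch of the distinction (all names invented): with -ffloat-store, the assignment to A forces a value rounded to 64 bits to be used in the following statement, whereas the single-expression form may still be evaluated entirely in extended-precision registers:

```fortran
      PROGRAM FSTORE
      DOUBLE PRECISION A, B, C, D1, D2, E
      B = 1.0D0 / 3.0D0
      C = 1.0D0
      E = 3.0D0
c     With -ffloat-store, A is spilled to memory and rounded to
c     64 bits before being used in the next statement:
      A = B + C
      D1 = A * E
c     Here the intermediate (B + C) may keep excess precision in a
c     register even with -ffloat-store:
      D2 = (B + C) * E
      PRINT *, D1, D2
      END
```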
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications.
The default is -fno-finite-math-only.
Improves performance of programs that use iterative DO loops by unrolling them, and is probably generally appropriate for Fortran, though it is not turned on at any optimization level. Note that outer loop unrolling isn't done specifically; decisions about whether to unroll a loop are made on the basis of its instruction count.
Also, no `loop discovery'(1) is done, so only loops written with DO benefit from loop optimizations, including, but not limited to, unrolling. Loops written with IF and GOTO are not currently recognized as such. This option unrolls only iterative DO loops, not DO WHILE loops.
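For illustration (loop bodies invented), the first loop below is written with DO and so is a candidate for unrolling; the second computes the same sum with IF and GOTO and is not recognized as a loop by this option:

```fortran
      PROGRAM LOOPS
      INTEGER I, S1, S2
      S1 = 0
      S2 = 0
c     Iterative DO loop: recognized, and may be unrolled.
      DO 10 I = 1, 100
         S1 = S1 + I
 10   CONTINUE
c     Equivalent IF/GOTO loop: not recognized as a loop.
      I = 1
 20   IF (I .GT. 100) GOTO 30
      S2 = S2 + I
      I = I + 1
      GOTO 20
 30   PRINT *, S1, S2
      END
```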
Unrolls DO WHILE loops in addition to iterative DO loops. In the absence of DO WHILE loops, this option is equivalent to -funroll-loops, but possibly slower.
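An invented example of the kind of loop that only -funroll-all-loops, and not -funroll-loops, will unroll:

```fortran
      PROGRAM DWHILE
      INTEGER N
      N = 1
c     A DO WHILE loop has no precomputed iteration count, so it is
c     unrolled only under -funroll-all-loops.
      DO WHILE (N .LT. 100)
         N = N * 2
      END DO
      PRINT *, N
      END
```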
-fmove-all-movables and -freduce-all-givs enable the loop optimizer to move all loop-invariant index computations on multi-rank array dummy arguments out of nested loops.
-frerun-loop-opt will move out of loops the offset calculations that arise because Fortran arrays by default have a lower bound of 1.
These three options are intended to be removed someday, once loop optimization is sufficiently advanced to perform all those transformations without help from these options.
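A sketch of the situation these options target (all names invented): for a dummy argument A(N,M), addressing A(I,J) involves an offset computation of roughly the form (J-1)*N + (I-1), whose (J-1)*N part is invariant in the inner loop over I and can be hoisted out of it:

```fortran
      SUBROUTINE FILL(A, N, M)
      INTEGER N, M, I, J
      DOUBLE PRECISION A(N, M)
c     For each element, the compiler computes an offset of the form
c     (J-1)*N + (I-1).  The (J-1)*N term does not change inside the
c     inner loop over I, so it is a candidate for being moved out
c     of that loop.
      DO 20 J = 1, M
         DO 10 I = 1, N
            A(I, J) = 0.0D0
 10      CONTINUE
 20   CONTINUE
      END
```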
See Options That Control Optimization (Using the GNU Compiler Collection (GCC)), for more information on options to optimize the generated machine code.
(1) `loop discovery' refers to the process by which a compiler, or indeed any reader of a program, determines which portions of the program are more likely to be executed repeatedly as it is being run. Such discovery typically is done early when compiling using optimization techniques, so the “discovered” loops get more attention (and more run-time resources, such as registers) from the compiler. It is easy to “discover” loops that are constructed out of looping constructs in the language (such as Fortran's DO). For some programs, “discovering” loops constructed out of lower-level constructs (such as IF and GOTO) can lead to generation of more optimal code.