Don't bother doing any performance analysis until most of the following items are taken care of, because there's no question they represent serious space/time problems, although some of them show up only given certain kinds of (popular) input.
Improve the malloc package and its uses to specify more info about memory pools and, where feasible, use obstacks to implement them.
Skip over uninitialized portions of initialized aggregate areas (such as arrays and COMMON/EQUIVALENCE areas) so zeros need not be output. This would reduce memory usage for large initialized aggregate areas, even ones with only one initialized element. As of version 0.5.18, a portion of this item has already been accomplished.
Redesign the statement-identification mechanism (in sta.c) so that the nature of a statement is determined as much as possible by looking entirely at its form, without looking at any context (previous statements, including the types of symbols). This would allow ripping out the statement-confirmation, symbol retraction/confirmation, and diagnostic-inhibition mechanisms. Plus, it would result in much-improved diagnostics. For example, CALL some-intrinsic(...), where the intrinsic is not a subroutine intrinsic, would produce an actual error instead of the unimplemented-statement catch-all.
Throughout g77, don't pass line/column pairs where a simple ffewhere type, which points to the error as precisely as is desired by the configuration, will do, and don't pass ffelexToken types where a simple ffewhere type will do. Then, allow a new default configuration of ffewhere such that the source line text is not preserved, and leave it to things like Emacs' next-error function to point to errors (now that next-error supports column, or perhaps character-offset, numbers). The change in calling sequences should improve performance somewhat, as should not having to save source lines. (Whether this whole item will improve performance is questionable, but it should improve maintainability.)
Handle DATA (A(I),I=1,1000000)/1000000*2/ more efficiently, especially as regards the assembly output. Some of this might require improving the back end, but lots of improvement in the space/time required by g77 itself can be fairly easily obtained without touching the back end. Maybe type conversion, where necessary, can be sped up as well in cases like the one shown (converting the constant 2 to the type of A, for example).
Rewrite lex.c so that it needs no feedback during tokenization, by keeping track of enough parse state on its own.