This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
New size optimization
- To: wilson at cygnus dot com
- Subject: New size optimization
- From: Joern Rennecke <amylaar at cygnus dot co dot uk>
- Date: Mon, 30 Mar 1998 23:50:45 +0100 (BST)
- Cc: egcs at cygnus dot com
Invariant code motion generally adds extra moves and register pressure.
AFAIK there are no common scenarios where it is a precursor to another
optimization that yields a net code size saving. Thus, I think it
should be turned off when optimizing for space.
I found this while trying to improve giv finding on the SH by correcting
RTX costs: some other code increased significantly in size, going from
949 bytes to 997.
However, with the patch below, it went down to 893 bytes.
Mon Mar 30 23:43:58 1998 Jörn Rennecke <amylaar@cygnus.co.uk>
* loop.c (scan_loop): Don't call move_movables for optimize_size.
Index: loop.c
===================================================================
RCS file: /cvs/cvsfiles/devo/gcc/loop.c,v
retrieving revision 1.125
diff -p -r1.125 loop.c
*** loop.c 1998/03/25 18:34:32 1.125
--- loop.c 1998/03/30 22:44:19
*************** scan_loop (loop_start, end, nregs, unrol
*** 1052,1059 ****
/* Now consider each movable insn to decide whether it is worth moving.
Store 0 in n_times_set for each reg that is moved. */
! move_movables (movables, threshold,
! insn_count, loop_start, end, nregs);
/* Now candidates that still are negative are those not moved.
Change n_times_set to indicate that those are not actually invariant. */
--- 1052,1060 ----
/* Now consider each movable insn to decide whether it is worth moving.
Store 0 in n_times_set for each reg that is moved. */
! if (! optimize_size)
! move_movables (movables, threshold,
! insn_count, loop_start, end, nregs);
/* Now candidates that still are negative are those not moved.
Change n_times_set to indicate that those are not actually invariant. */