This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
[PATCH] ARM pre-indexed addressing mode for absolute addresses
- From: Nicolas Pitre <nico at cam dot org>
- To: gcc-patches at gcc dot gnu dot org
- Cc: Richard Earnshaw <rearnsha at arm dot com>
- Date: Tue, 30 Aug 2005 16:18:07 -0400 (EDT)
- Subject: [PATCH] ARM pre-indexed addressing mode for absolute addresses
A while ago I submitted a patch:
http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01195.html
It then was approved for 4.1:
http://gcc.gnu.org/ml/gcc-patches/2004-12/msg00554.html
It somehow got buried in the meantime, before the 4.1 branch
started. I'm therefore asking for confirmation that I can still commit
this patch now. Thanks.
[date] Nicolas Pitre <nico@cam.org>
* config/arm/arm.c (arm_legitimize_address): Split absolute addresses
to allow matching the pre-indexed addressing mode.
(arm_override_options): Remove now irrelevant comment.
Index: config/arm/arm.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/arm/arm.c,v
retrieving revision 1.476
diff -u -r1.476 arm.c
--- config/arm/arm.c 20 Aug 2005 10:31:40 -0000 1.476
+++ config/arm/arm.c 30 Aug 2005 20:16:59 -0000
@@ -1234,10 +1234,6 @@
if (optimize_size)
{
- /* There's some dispute as to whether this should be 1 or 2. However,
- experiments seem to show that in pathological cases a setting of
- 1 degrades less severely than a setting of 2. This could change if
- other parts of the compiler change their behavior. */
arm_constant_limit = 1;
/* If optimizing for size, bump the number of instructions that we
@@ -3759,6 +3755,34 @@
x = gen_rtx_MINUS (SImode, xop0, xop1);
}
+ /* Make sure to take full advantage of the pre-indexed addressing mode
+ with absolute addresses, which often allows the base register to
+ be factored out across multiple adjacent memory references, and might
+ even allow the minipool to be avoided entirely. */
+ else if (GET_CODE (x) == CONST_INT)
+ {
+ unsigned int shift;
+ HOST_WIDE_INT mask, base, index;
+ rtx base_reg;
+
+ /* ldr and ldrb can use a 12-bit index; ldrsb and the rest can only
+ use an 8-bit index. So let's use a 12-bit index for SImode only and
+ hope that arm_gen_constant will enable ldrb to use more bits. */
+ shift = (mode == SImode) ? 12 : 8;
+ mask = (1 << shift) - 1;
+ base = INTVAL (x) & ~mask;
+ index = INTVAL (x) & mask;
+ if (count_bits (base) > (32 - shift)/2)
+ {
+ /* It'll most probably be more efficient to generate the base
+ with more bits set and use a negative index instead. */
+ base |= mask;
+ index -= mask;
+ }
+ base_reg = force_reg (SImode, GEN_INT (base));
+ x = gen_rtx_PLUS (SImode, base_reg, GEN_INT (index));
+ }
+
if (flag_pic)
{
/* We need to find and carefully transform any SYMBOL and LABEL