Bug 42505 - [4.4/4.5/4.6 Regression] loop canonicalization causes a lot of unnecessary temporary variables
Status: RESOLVED FIXED
Alias: None
Product: gcc
Classification: Unclassified
Component: middle-end
Version: 4.4.0
Importance: P2 normal
Target Milestone: 4.6.0
Assignee: Not yet assigned to anyone
URL:
Keywords: missed-optimization
Depends on:
Blocks: 39839
Reported: 2009-12-25 18:45 UTC by Shih-wei Liao
Modified: 2011-01-13 16:47 UTC
CC: 7 users

See Also:
Host: i686-linux
Target: arm-eabi
Build: i686-linux
Known to work: 4.2.1, 4.6.0
Known to fail: 4.4.0, 4.5.0
Last reconfirmed: 2010-01-04 23:44:35


Description Shih-wei Liao 2009-12-25 18:45:16 UTC
This regression was caused by loop canonicalization.

The following example:

struct A {
 int f1;
 int f2;
};

int func(int c);

int test(struct A* src, struct A* dst, int count)
{
  while (count--) {
    if (!func(src->f2)) {
        return 0;
      }
      *dst++ = *src++;
  }

  return 1;
}

gcc 4.2.1 compiles this to 40 bytes, gcc 4.4.0 to 48 bytes:

gcc 4.2.1 output:
test:
      push    {r4, r5, r6, lr}
      mov     r4, r0
      mov     r5, r1
      mov     r6, r2
      b       .L2
.L3:
      ldr     r0, [r4, #4]
      bl      func
      cmp     r0, #0
      beq     .L6
      mov     r3, r5
      mov     r2, r4
      ldmia   r2!, {r0, r1}
      stmia   r3!, {r0, r1}
      mov     r5, r3
      mov     r4, r2
.L2:
      sub     r6, r6, #1
      bcs     .L3
      mov     r0, #1
.L6:
      @ sp needed for prologue
      pop     {r4, r5, r6, pc}

gcc 4.4.0 output:
      push    {r4, r5, r6, r7, lr}    // note r7 is clobbered
      sub     sp, sp, #12             // why does anything need to go on the stack?
      mov     r7, r0
      str     r1, [sp, #4]            // why is r1 stored on the stack?
      mov     r6, r2
      mov     r5, #0
      b       .L2
.L5:
      add     r4, r7, r5
      ldr     r0, [r4, #4]
      bl      func
      sub     r6, r6, #1
      cmp     r0, #0
      beq     .L4
      ldr     r1, [sp, #4]   // load from stack
      add     r3, r1, r5
      add     r5, r5, #8
      ldmia   r4!, {r1, r2}
      stmia   r3!, {r1, r2}
.L2:
      cmp     r6, #0
      bne     .L5
      mov     r0, #1
.L4:
      add     sp, sp, #12
      @ sp needed for prologue
      pop     {r4, r5, r6, r7, pc}

This is caused by the loop canonicalization pass (pass_iv_optimize) that was added in gcc 4.4.
Final GIMPLE form from the gcc 4.2.1 compiler:

test (src, dst, count)
{
 int a;
 int D.1545;

<bb 2>:
 goto <bb 6> (<L3>);

<L0>:;
 a = func (MEM[base: src, offset: 4]);
 if (a == 0) goto <L8>; else goto <L2>;

<L8>:;
 D.1545 = 0;
 goto <bb 8> (<L5>);

<L2>:;
 MEM[base: dst] = MEM[base: src];
 dst = dst + 8B;
 src = src + 8B;

<L3>:;
 count = count - 1;
 if (count != -1) goto <L0>; else goto <L9>;

<L9>:;
 D.1545 = 1;

<L5>:;
 return D.1545;
}

The final GIMPLE in gcc 4.4:

test (struct A * src, struct A * dst, int count)
{
 unsigned int ivtmp.22; // induction variables introduced by pass_iv_optimize
 unsigned int ivtmp.19;
 int a;
 int D.1274;

<bb 2>:
 ivtmp.22 = (unsigned int) count;  // copy of count, count itself is not used anymore
 ivtmp.19 = 0;
 goto <bb 6>;

<bb 3>:
 a = func (MEM[base: src + ivtmp.19, offset: 4]);
 ivtmp.22 = ivtmp.22 - 1;
 if (a == 0)
   goto <bb 4>;
 else
   goto <bb 5>;

<bb 4>:
 D.1274 = 0;
 goto <bb 8>;

<bb 5>:
 MEM[base: dst, index: ivtmp.19] = MEM[base: src, index: ivtmp.19];
 ivtmp.19 = ivtmp.19 + 8;

<bb 6>:
 if (ivtmp.22 != 0)
   goto <bb 3>;
 else
   goto <bb 7>;

<bb 7>:
 D.1274 = 1;

<bb 8>:
 return D.1274;
}

The subsequent RTL passes cannot optimize away these temporary induction variables, so they are spilled to the stack, which causes a number of other inefficiencies.

The main question is how to fix this. There are three ways:
1) turn off loop canonicalization for -Os
2) optimize away the extra variable in the GIMPLE passes
3) optimize away the extra variable in the RTL passes
Comment 1 Ramana Radhakrishnan 2010-01-04 23:44:35 UTC
For completeness, the options used are -mthumb -Os -march=armv5te?


With trunk I see a size of 52 bytes and this code.

        .type   test, %function
test:
        push    {r4, r5, r6, r7, lr}
        sub     sp, sp, #12
        mov     r7, r0
        str     r1, [sp, #4]
        mov     r6, r2
        mov     r5, #0
        b       .L2
.L4:
        add     r4, r7, r5
        ldr     r0, [r4, #4]
        bl      func
        sub     r6, r6, #1
        cmp     r0, #0
        beq     .L5
        ldr     r1, [sp, #4]
        add     r3, r1, r5
        ldmia   r4!, {r1, r2}
        stmia   r3!, {r1, r2}
        add     r5, r5, #8
.L2:
        cmp     r6, #0
        bne     .L4
        mov     r0, #1
        b       .L3
.L5:
        mov     r0, #0
.L3:
        add     sp, sp, #12
        @ sp needed for prologue
        pop     {r4, r5, r6, r7, pc}
        .size   test, .-test

Comment 2 Shih-wei Liao 2010-01-07 11:31:44 UTC
1. Yes, the flags used are "-mthumb -Os -march=armv5te".

2. For completeness, I also tried generating 32-bit instructions. In ARM mode, GCC 4.5.0 (trunk as of last week) does not store anything on the stack unnecessarily; i.e., there is no longer a "sub sp, sp, #12" instruction. See below:

00000000 <test>:
   0:   e92d41f0        push    {r4, r5, r6, r7, r8, lr}
   4:   e1a05000        mov     r5, r0
   8:   e1a04001        mov     r4, r1
   c:   e1a07002        mov     r7, r2
  10:   e3a06000        mov     r6, #0
  14:   ea000009        b       40 <test+0x40>
  18:   e0858006        add     r8, r5, r6
  1c:   e5980004        ldr     r0, [r8, #4]
  20:   ebfffffe        bl      0 <func>
  24:   e3500000        cmp     r0, #0
  28:   e2477001        sub     r7, r7, #1
  2c:   0a000006        beq     4c <test+0x4c>
  30:   e8980003        ldm     r8, {r0, r1}
  34:   e0843006        add     r3, r4, r6
  38:   e8830003        stm     r3, {r0, r1}
  3c:   e2866008        add     r6, r6, #8
  40:   e3570000        cmp     r7, #0
  44:   1afffff3        bne     18 <test+0x18>
  48:   e3a00001        mov     r0, #1
  4c:   e8bd41f0        pop     {r4, r5, r6, r7, r8, lr}
  50:   e12fff1e        bx      lr
Comment 3 Richard Earnshaw 2010-01-07 11:45:25 UTC
(In reply to comment #2)
> 1. Yes, the flags used are "-mthumb -Os -march=armv5te".

>   4c:   e8bd41f0        pop     {r4, r5, r6, r7, r8, lr}
>   50:   e12fff1e        bx      lr
> 

This looks more like a return sequence for v4t than v5te -- why isn't the PC popped directly?
Comment 4 Sandra Loosemore 2010-06-04 00:09:14 UTC
I've been looking at this problem today.  Here's the stupid part coming out of ivopts:

<bb 5>:
  # ivtmp.7_21 = PHI <0(2), ivtmp.7_20(4)>
  # ivtmp.10_22 = PHI <ivtmp.10_24(2), ivtmp.10_23(4)>
  count_25 = (int) ivtmp.10_22;
  if (count_25 != 0)
    goto <bb 3>;
  else
    goto <bb 6>;

No subsequent pass is recognizing that the unsigned-to-signed conversion is useless and "count" is otherwise dead.  

If I change the parameter "count" to have type "unsigned int", then ivopts does the obvious replacement itself:

<bb 5>:
  # ivtmp.7_21 = PHI <0(2), ivtmp.7_20(4)>
  # ivtmp.10_22 = PHI <count_7(D)(2), ivtmp.10_23(4)>
  if (ivtmp.10_22 != 0)
    goto <bb 3>;
  else
    goto <bb 6>;

Then "count" is completely gone from the loop after ivopts and the resulting code looks good.

So, fix this somewhere inside ivopts to make the signed case produce the same code as the unsigned one?  Or tell it not to replace count at all if it has to do a type conversion?  I'm still trying to find my way around the code for this pass to figure out where things happen, so if this is obvious to someone else I'd appreciate a pointer.  :-)
Comment 5 Steven Bosscher 2010-06-04 07:45:30 UTC
AFAIU, you can't randomly change signed to unsigned, due to different overflow semantics, which is why IVOPTS doesn't make this change itself. Imagine you enter the loop with count = 0, and with a second counter hidden in func. You will not get the same number of iterations if you change the type of count from "int" to "unsigned int".
Comment 6 Richard Biener 2010-06-04 09:08:59 UTC
If the result of the conversion is only used in an exit equality test against a
constant it can be dropped.  This could also happen in a following
forwprop run which is our single tree-combiner (though that currently will
combine into comparisons only if the result will be a constant, it doesn't
treat defs with a single use specially which it could, if the combined
constant is in gimple form).
Comment 7 Sandra Loosemore 2010-06-05 20:41:58 UTC
OK, I'm testing a hack to rewrite_use_compare to make it know that it doesn't have to introduce a temporary just to compare against constant zero.  I'm also doing a little tuning of the cost model for -Os, using CSiBE.
Comment 8 Sandra Loosemore 2010-06-10 13:01:21 UTC
I was barking up the wrong tree with my last idea -- the signed/unsigned conversion business was a red herring.  Here's what I now believe is the problem: the cost computation is underestimating the register pressure costs, so we are in fact spilling when the cost computation thinks it still has "free" registers.

A hack to make get_computation_cost_at add target_reg_cost to the result when it must use a scratch register seemed to have positive overall effects on code size (as well as fixing the test case).  But, I don't think that's the real solution, as I can't come up with a good logical justification for putting such a cost there.  :-)  estimate_reg_pressure_cost already reserves 3 "free" registers for such things.  Anyway, I am continuing to poke at this in hopes of figuring out where the register costs model is really going wrong.
Comment 9 Sandra Loosemore 2010-06-12 07:42:04 UTC
I now have a specific theory of what is going on here.  There are two problems:

(1) estimate_reg_pressure_cost is not accounting for the function call in the loop body.  In this case it ought to use call_used_regs instead of fixed_regs to determine how many registers are available for loop invariants.  Here the target is Thumb-1 and there are only 4 non-call-clobbered registers available rather than 9, so we are much more constrained than ivopts thinks we are.  This is pretty straightforward to fix.

(2) For the test case filed with the issue, there are 4 registers needed for the two candidates and two invariants ivopts is selecting, so even with the fix for (1) ivopts thinks it has enough registers available.  But, there are two uses of the form (src + offset) in the ivopts output, although they appear differently in the gimple code.  RTL optimizations are combining these and allocating a temporary.  Since the two uses span the function call in the loop body, the temporary needs to be assigned to a non-call-clobbered register.  This is why there is a spill of the other loop invariant.  Perhaps we could make the RA smarter about recomputing the src + offset value rather than resort to spilling something, but since I am dumb about the RA ;-) I'm planning to keep poking at the ivopts cost model instead.
Comment 10 Sandra Loosemore 2010-06-19 12:56:58 UTC
Patch posted here:

http://gcc.gnu.org/ml/gcc-patches/2010-06/msg01920.html
Comment 11 sandra 2010-07-05 17:41:13 UTC
Subject: Bug 42505

Author: sandra
Date: Mon Jul  5 17:40:57 2010
New Revision: 161844

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=161844
Log:
2010-07-05  Sandra Loosemore  <sandra@codesourcery.com>

	PR middle-end/42505

	gcc/
	* tree-ssa-loop-ivopts.c (determine_set_costs): Delete obsolete
	comments about cost model.
	(try_add_cand_for):  Add second strategy for choosing initial set
	based on original IVs, controlled by ORIGINALP argument.
	(get_initial_solution): Add ORIGINALP argument.
	(find_optimal_iv_set_1): New function, split from find_optimal_iv_set.
	(find_optimal_iv_set): Try two different strategies for choosing
	the IV set, and return the one with lower cost.

	gcc/testsuite/
	* gcc.target/arm/pr42505.c: New test case.

Added:
    trunk/gcc/testsuite/gcc.target/arm/pr42505.c
Modified:
    trunk/gcc/ChangeLog
    trunk/gcc/testsuite/ChangeLog
    trunk/gcc/tree-ssa-loop-ivopts.c

Comment 12 sandra 2010-07-10 18:43:42 UTC
Subject: Bug 42505

Author: sandra
Date: Sat Jul 10 18:43:29 2010
New Revision: 162043

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=162043
Log:
2010-07-10  Sandra Loosemore  <sandra@codesourcery.com>

	PR middle-end/42505

	gcc/
	* tree-inline.c (estimate_num_insns): Refactor builtin complexity
	lookup code into....
	* builtins.c (is_simple_builtin, is_inexpensive_builtin): ...these
	new functions.
	* tree.h (is_simple_builtin, is_inexpensive_builtin): Declare.
	* cfgloopanal.c (target_clobbered_regs): Define.
	(init_set_costs): Initialize target_clobbered_regs.
	(estimate_reg_pressure_cost): Add call_p argument.  When true,
	adjust the number of available registers to exclude the
	call-clobbered registers.
	* cfgloop.h (target_clobbered_regs): Declare.
	(estimate_reg_pressure_cost): Adjust declaration.
	* tree-ssa-loop-ivopts.c (struct ivopts_data): Add body_includes_call.
	(ivopts_global_cost_for_size): Pass it to estimate_reg_pressure_cost.
	(determine_set_costs): Dump target_clobbered_regs.
	(loop_body_includes_call): New function.
	(tree_ssa_iv_optimize_loop): Use it to initialize new field.
	* loop-invariant.c (gain_for_invariant): Adjust arguments to pass
	call_p flag through.
	(best_gain_for_invariant): Likewise.
	(find_invariants_to_move): Likewise.
	(move_single_loop_invariants): Likewise, using already-computed
	has_call field.

Modified:
    trunk/gcc/ChangeLog
    trunk/gcc/builtins.c
    trunk/gcc/cfgloop.h
    trunk/gcc/cfgloopanal.c
    trunk/gcc/loop-invariant.c
    trunk/gcc/tree-inline.c
    trunk/gcc/tree-ssa-loop-ivopts.c
    trunk/gcc/tree.h

Comment 13 Sandra Loosemore 2010-10-01 15:01:08 UTC
I think this bug is fixed now.
Comment 14 Jeffrey A. Law 2011-01-13 15:45:22 UTC
Fixed long ago.
Comment 15 Richard Biener 2011-01-13 16:47:22 UTC
For 4.6.  Nothing to backport here.