sched-deps speedup
Richard Earnshaw
rearnsha@arm.com
Thu Jan 11 05:27:00 GMT 2001
> On Wed, Jan 10, 2001 at 05:44:27PM -0500, Rod Stewart wrote:
> > From what Jeff said this may be fixed, although I don't see anything from
> > him in gcc/Changelog which appears relevant from my limited understanding.
>
> Jeff was thinking of his instantiate_virtual_regs change.
>
> But no, the arm port has problems with its block move patterns.
> This is the instruction that aborted:
>
> (insn 453 1953 1948 (parallel[
>             (set (reg:SI 0 r0)
>                 (mem/s:SI (reg:SI 12 ip) 0))
>             (set (reg:SI 1 r1)
>                 (mem/s:SI (plus:SI (reg/f:SI 95)
>                         (const_int 4 [0x4])) 0))
>             (set (reg:SI 2 r2)
>                 (mem/s:SI (plus:SI (reg/f:SI 95)
>                         (const_int 8 [0x8])) 0))
>             (set (reg:SI 3 r3)
>                 (mem/s:SI (plus:SI (reg/f:SI 95)
>                         (const_int 12 [0xc])) 0))
>         ] ) 202 {*ldmsi} (nil)
>     (nil))
>
> I have no idea how the existing ldmsi patterns can be fixed.
> The only options that I can think of involve either having N
> patterns to match, or using a different pattern before reload
> and splitting it afterward.
>
We've always got away with this before, because nothing really cared about
the pseudos that didn't get fixed up during reload.  However, it seems it
is time to fix the md file, even though that means yet more patterns to
match.

Unfortunately, the obvious approach, that is:
(define_insn "*ldmsi4"
  [(match_parallel 0 "load_multiple_operation"
    [(set (match_operand:SI 2 "arm_hard_register_operand" "")
          (mem:SI (match_operand:SI 1 "s_register_operand" "r")))
     (set (match_operand:SI 3 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 4))))
     (set (match_operand:SI 4 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 8))))
     (set (match_operand:SI 5 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 12))))])]
  "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
  "ldm%?ia\\t%1, {%2, %3, %4, %5}"
)

(define_insn "*ldmsi3"
  [(match_parallel 0 "load_multiple_operation"
    [(set (match_operand:SI 2 "arm_hard_register_operand" "")
          (mem:SI (match_operand:SI 1 "s_register_operand" "r")))
     (set (match_operand:SI 3 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 4))))
     (set (match_operand:SI 4 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 8))))])]
  "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
  "ldm%?ia\\t%1, {%2, %3, %4}"
)

(define_insn "*ldmsi2"
  [(match_parallel 0 "load_multiple_operation"
    [(set (match_operand:SI 2 "arm_hard_register_operand" "")
          (mem:SI (match_operand:SI 1 "s_register_operand" "r")))
     (set (match_operand:SI 3 "arm_hard_register_operand" "")
          (mem:SI (plus:SI (match_dup 1) (const_int 4))))])]
  "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
  "ldm%?ia\\t%1, {%2, %3}"
)
causes genrecog to generate a bad insn-recog file -- we get switch
statements with two "case 4:" entries.
 L10: ATTRIBUTE_UNUSED_LABEL
  x4 = XEXP (x3, 1);
  if (GET_CODE (x4) == CONST_INT)
    goto L51;
  goto ret0;

 L51: ATTRIBUTE_UNUSED_LABEL
  switch ((int) XWINT (x4, 0))
    {
    case 4:
      goto L11;
    case 4:
      goto L53;
    default:
      break;
    }
I guess this means that genrecog isn't trying hard enough to fold its
expressions.
R.
PS The above example, if fed into genrecog, will show the problem.