> (define_constraint "K"
>   "Positive integer."
>   (and (match_code "const_int")
>        (match_test "ival > 0")))
>
> (define_constraint "L"
>   "Negative integer."
>   (and (match_code "const_int")
>        (match_test "ival < 0")))
...
> (define_insn "adddi3"
>   [(set (match_operand:DI 0 "register_operand" "=d,d,d")
>         (plus:DI (match_operand:DI 1 "register_operand" "%d,d,d")
>                  (match_operand:DI 2 "arith_operand" "d,K,JL")))]
>   ""
>   "@
>    add\t%L0,%L1,%L2\;addc\t%M0,%M1,%M2
>    addi\t%L0,%L1,%2\;addc\t%M0,%M1,r0
>    addi\t%L0,%L1,%2\;addc\t%M0,%M1,r0\;addi\t%M0,%M0,-1"
>   [(set_attr "type" "darith")
>    (set_attr "mode" "DI")
>    (set_attr "length" "8,8,12")])
These constraints do not operate properly. You've assumed that K is a
positive 32-bit number, but there's nothing to prevent it from being a
positive 64-bit number -- you'd be throwing away the high 32 bits.
Similarly with L.
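If you do keep constraints like these, they want an explicit range
check -- something along these lines, assuming addi takes a signed
16-bit immediate field (adjust the bounds to whatever the field
actually is):

```
(define_constraint "K"
  "Positive integer fitting the 16-bit immediate field."
  (and (match_code "const_int")
       (match_test "IN_RANGE (ival, 1, 32767)")))
```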
Frankly, I'm surprised you split the integers here. Why not just
(define_insn "adddi3"
  [(set (match_operand:DI 0 "register_operand" "=d,d")
        (plus:DI (match_operand:DI 1 "register_operand" "%d,d")
                 (match_operand:DI 2 "arith_operand" "d,n")))]
  ""
  "@
   add\t%L0,%L1,%L2\;addc\t%M0,%M1,%M2
   addi\t%L0,%L1,%L2\;addic\t%M0,%M1,%M2"
and have %L and %M pull out the low and high 32-bits from a const_int or
const_double. Note that on a 64-bit host, you will never see a
const_double.
You can then ditch the K and L constraints entirely.
(Note "n" vs "i" -- "n" implies a numeric constant, whereas "i" includes
symbolic constants. Although you shouldn't have seen symbolic
double-word constants in the first place, it doesn't hurt to use the
More Correct constraint anyway.)
> (define_insn "subsi3"
>   [(set (match_operand:SI 0 "register_operand" "=d,d,d,d,d")
>         (minus:SI (match_operand:SI 1 "arith_operand" "dJ,dJ,dJ,I,i")
>                   (match_operand:SI 2 "arith_operand" "d,I,i,dJ,dJ")))]
...
> (define_insn "subdi3"
>   [(set (match_operand:DI 0 "register_operand" "=d,d,d,d")
>         (minus:DI (match_operand:DI 1 "register_operand" "d,d,d,d")
>                   (match_operand:DI 2 "arith_operand" "d,P,J,N")))]
You can ditch the immediate alternatives for operand 2; those should
have been canonicalized to addition of the negated constant. You can
ditch the pure zero alternatives as well; those should have been
discarded entirely.
You might do well to write subdi3 as
(define_insn "subdi3"
  [(set (match_operand:DI 0 "register_operand" "=d,d")
        (minus:DI (match_operand:DI 1 "arith_operand" "d,n")
                  (match_operand:DI 2 "register_operand" "d,d")))]
  ""
  "@
   rsub\t%L0,%L2,%L1\;rsubc\t%M0,%M2,%M1
   rsubi\t%L0,%L2,%L1\;rsubic\t%M0,%M2,%M1"
  ...)
And, really, subsi3 shouldn't be much different.
> (define_insn "*negdi2_internal"
>   [(set (match_operand:DI 0 "register_operand" "=d")
>         (neg:DI (match_operand:DI 1 "register_operand" "d")))
>    (clobber (match_operand:SI 2 "register_operand" "=d"))]
>   ""
>   "rsub\t%L0,%L1,r0\;rsubc\t%M0,%M1,r0"
Where is operand 2 used? I think you were interested in saving %M1 if
there is overlap between %M1 and %L0, but forgot to do it.
I honestly can't remember if this is something that you have to worry
about. And if it is, you have the same problem with every other
double-word operation.
What might be best in the short-term is to simply mark the outputs on
all of these instructions as early-clobber, and avoid the extra scratch.
That will be good enough until you rewrite the patterns to model the
carry flag and be split before reload, at which point the early clobber
problem goes away.
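That is, drop the scratch and mark the output early-clobber with "&".
For the negdi2 pattern above that would be something like:

```
(define_insn "negdi2"
  [(set (match_operand:DI 0 "register_operand" "=&d")
        (neg:DI (match_operand:DI 1 "register_operand" "d")))]
  ""
  "rsub\t%L0,%L1,r0\;rsubc\t%M0,%M1,r0")
```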
> (define_insn "sext8"
>   [(set (match_operand:SI 0 "register_operand" "=d")
>         (sign_extend:SI (match_operand:QI 1 "register_operand" "d")))]
>   "(GET_CODE(operands[1]) != MEM)"
Useless test vs MEM -- the register_operand predicate already
guarantees operand 1 isn't a MEM.
> (define_insn "extendqidi2"
> (define_insn "extendhidi2"
Delete these. They are *exactly* what you'd get from generic code, i.e.
extension to SImode followed by extension to DImode.
> (define_insn "sext8"
> (define_expand "extendqisi2"
Merge these. I.e. delete the second and rename the first.
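I.e. keep sext8's body under the standard name, something like this
(output template assumed from the pattern name):

```
(define_insn "extendqisi2"
  [(set (match_operand:SI 0 "register_operand" "=d")
        (sign_extend:SI (match_operand:QI 1 "register_operand" "d")))]
  ""
  "sext8\t%0,%1")
```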
> (define_insn "sext16"
> (define_expand "extendhisi2"
Likewise.
> (define_insn "movsi_status"
> (define_insn "*movsi_internal3"
> (define_insn "*movsi_internal2"
Merge these.
> (define_insn "*movsi_internal5_pltop"
>   [(set (match_operand:SI 0 "register_operand" "=d,d")
>         (match_operand:SI 1 "call_insn_operand" ""))]
>   "(register_operand (operands[0], Pmode) &&
>     PLT_ADDR_P (operands[1]))"
> {
>   gcc_unreachable ();
> }
Huh?
It appears that in some of the places you perform loads and stores,
you forget the %iN needed to choose the proper load/store instruction
based on the memory address being used. E.g. the movhi_internal2 load
alternatives; I remember seeing others that looked suspicious.
(define_expand "indirect_jump"
  [(set (pc) (match_operand 0 "register_operand" "d"))]
  ""
{
  rtx dest = operands[0];
  if (GET_CODE (dest) != REG || GET_MODE (dest) != Pmode)
    operands[0] = copy_to_mode_reg (Pmode, dest);
  emit_jump_insn (gen_indirect_jump_internal1 (operands[0]));
  DONE;
})
Huh? Why do you need all that? Seems to me that
indirect_jump_internal1 can simply be renamed indirect_jump and be done
with it.
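That is, just this, where the branch mnemonic is whatever
indirect_jump_internal1 currently emits (I'm guessing at it here):

```
(define_insn "indirect_jump"
  [(set (pc) (match_operand:SI 0 "register_operand" "d"))]
  ""
  "br\t%0")
```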
(define_expand "tablejump_internal4"
Unused?
;; Normal return.
;; We match any mode for the return address, so that this will work with
;; both 32 bit and 64 bit targets.
Comment copied from MIPS, and it doesn't apply here.
(define_insn "call_value_intern_plt"
  [(set (match_operand:VOID 0 "register_operand" "=d")
        (call (mem (match_operand:SI 1 "call_insn_plt_operand" ""))
              (match_operand:SI 2 "" "i")))
   (clobber (match_operand:SI 3 "register_operand" "=d"))
   (use (match_operand:SI 4 "register_operand"))]
  "flag_pic"
{
  register rtx target2 = gen_rtx_REG (Pmode, GP_REG_FIRST +
                                      MB_ABI_SUB_RETURN_ADDR_REGNUM);
  gen_rtx_CLOBBER (VOIDmode, target2);
  return "brlid\tr15,%1\;%#";
What in the world is up with that clobber? gen_rtx_CLOBBER merely
allocates an rtx; since its result is thrown away, that statement has
no effect whatsoever.
Since you have it available, you might as well use %3 in that
instruction pattern instead of r15 directly.
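That is, eliding the unchanged operands, the body reduces to something
like (this assumes operand 3 is indeed the link register, as its
clobber suggests):

```
(define_insn "call_value_intern_plt"
  ...
{
  return "brlid\t%3,%1\;%#";
})
```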
(define_insn "call_value_intern"
Similarly.
r~