This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: RFA: cache enabled attribute by insn code
- From: Richard Sandiford <rdsandiford at googlemail dot com>
- To: Richard Earnshaw <rearnsha at arm dot com>
- Cc: Christophe Lyon <christophe dot lyon at linaro dot org>, Jeff Law <law at redhat dot com>, "gcc-patches at gcc dot gnu dot org" <gcc-patches at gcc dot gnu dot org>
- Date: Thu, 29 May 2014 11:45:00 +0100
- Subject: Re: RFA: cache enabled attribute by insn code
- Authentication-results: sourceware.org; auth=none
- References: <87k39gkgq0 dot fsf at talisman dot default> <537B95E9 dot 1010203 at redhat dot com> <87iop0i13e dot fsf at talisman dot default> <537F9126 dot 5080904 at redhat dot com> <CAKdteOaYVgFZg5aUWUj9K7LPgmiGoT3iFiiaVbc-eSUKv22HCQ at mail dot gmail dot com> <87zji31fh7 dot fsf at sandifor-thinkpad dot stglab dot manchester dot uk dot ibm dot com> <87r43f1f1e dot fsf at sandifor-thinkpad dot stglab dot manchester dot uk dot ibm dot com> <87ioor1cr8 dot fsf at sandifor-thinkpad dot stglab dot manchester dot uk dot ibm dot com> <53870126 dot 2080002 at arm dot com>
Richard Earnshaw <rearnsha@arm.com> writes:
> On 27/05/14 16:07, Richard Sandiford wrote:
>> Richard Sandiford <rdsandiford@googlemail.com> writes:
>>> Richard Sandiford <rsandifo@linux.vnet.ibm.com> writes:
>>>> Does the following patch help?
>>>
>>> Bah, it won't of course: %i1 needs to be the operator.
>>
>> Here's v2. I tested that it worked for simple tests like:
>>
>> int f1 (int x, int y) { return x + (y << 4); }
>> int f2 (int x, int y) { return x - (y << 4); }
>> int f3 (int x, int y) { return x & (y << 4); }
>> int f4 (int x, int y) { return x | (y << 4); }
>> int f5 (int x, int y) { return x ^ (y << 4); }
>> int f6 (int x, int y) { return (y << 4) - x; }
>> int g1 (int x, int y, int z) { return x + (y << z); }
>> int g2 (int x, int y, int z) { return x - (y << z); }
>> int g3 (int x, int y, int z) { return x & (y << z); }
>> int g4 (int x, int y, int z) { return x | (y << z); }
>> int g5 (int x, int y, int z) { return x ^ (y << z); }
>> int g6 (int x, int y, int z) { return (y << z) - x; }
>>
>> as well as the testcase.
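>>
>> (For illustration only, hand-written rather than captured from the
>> compiler: each of these should combine into a single data-processing
>> instruction with a shifted second operand, e.g. for f1 something like:
>>
>>     add     r0, r0, r1, lsl #4
>>
>> with x in r0 and y in r1 per the AAPCS.)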
>>
>> Thanks,
>> Richard
>>
>>
>> gcc/
>> * config/arm/iterators.md (shiftable_ops): New code iterator.
>> (t2_binop0, arith_shift_insn): New code attributes.
>> * config/arm/arm.md (insn_enabled): Delete.
>> (enabled): Remove insn_enabled test.
>> (*arith_shiftsi): Split out...
>> (*arith_multsi): ...this pattern and remove insn_enabled attribute.
>>
>
>
> Thanks, Richard.
> I've tweaked this as follows and committed it.
>
> I now consider "shift_operator" in the arm backend deprecated. We
> should be moving towards using shift_nomul_operator.
>
> There's one final wart still to be handled, though. 'rotate' can only
> take an immediate operand, not a register. We can currently deal with
> this, but it's not clean in terms of constraint handling. I'll see if I
> can fix this up sometime, but not today.
Thanks for picking it up. I realised later that I'd fluffed the
MULT check in:
> @@ -9876,38 +9868,34 @@
>
> ;; Patterns to allow combination of arithmetic, cond code and shifts
>
> -(define_insn "*arith_shiftsi"
> - [(set (match_operand:SI 0 "s_register_operand" "=r,r,r,r")
> - (match_operator:SI 1 "shiftable_operator"
> - [(match_operator:SI 3 "shift_operator"
> - [(match_operand:SI 4 "s_register_operand" "r,r,r,r")
> - (match_operand:SI 5 "shift_amount_operand" "M,M,M,r")])
> - (match_operand:SI 2 "s_register_operand" "rk,rk,r,rk")]))]
> +(define_insn "*<arith_shift_insn>_multsi"
> + [(set (match_operand:SI 0 "s_register_operand" "=r,r")
> + (shiftable_ops:SI
> + (mult:SI (match_operand:SI 2 "s_register_operand" "r,r")
> + (match_operand:SI 3 "power_of_two_operand" ""))
> + (match_operand:SI 1 "s_register_operand" "rk,<t2_binop0>")))]
> "TARGET_32BIT"
> - "%i1%?\\t%0, %2, %4%S3"
> + "<arith_shift_insn>%?\\t%0, %1, %2, lsl %b3"
> + [(set_attr "predicable" "yes")
> + (set_attr "predicable_short_it" "no")
> + (set_attr "shift" "4")
> + (set_attr "arch" "a,t2")
> + (set_attr "type" "alu_shift_imm")])
> +
> +(define_insn "*<arith_shift_insn>_shiftsi"
> + [(set (match_operand:SI 0 "s_register_operand" "=r,r,r")
> + (shiftable_ops:SI
> + (match_operator:SI 2 "shift_nomul_operator"
> + [(match_operand:SI 3 "s_register_operand" "r,r,r")
> + (match_operand:SI 4 "shift_amount_operand" "M,M,r")])
> + (match_operand:SI 1 "s_register_operand" "rk,<t2_binop0>,rk")))]
> + "TARGET_32BIT && GET_CODE (operands[3]) != MULT"
...this condition: operands[3] refers to the operator's position in the
old pattern; in the new numbering the shift operator is operands[2], so
the check as written tests the shifted register instead. It looks like
shift_nomul_operator should make the check redundant anyway.
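(A minimal sketch of what the condition would look like under the new
numbering, assuming the explicit exclusion were kept at all rather than
relying on the predicate:

    ;; Sketch only: the shift operator is now operand 2, so an explicit
    ;; MULT exclusion would read
    "TARGET_32BIT && GET_CODE (operands[2]) != MULT"
    ;; though shift_nomul_operator already rejects MULT, making the
    ;; extra test redundant.

)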
Richard