Re: [x86, PATCH] operand reordering for commutative operations


Hi All,

Thanks a lot for your comments.
I've rewritten reorder_operands as you proposed, but I'd like to know
whether we should apply this reordering at -O0.
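
In case it helps the discussion, here is a rough sketch of what the
reworked function looks like (untested; it assumes GCC 5-era internals,
with estimate_num_insns/eni_size_weights from tree-inline.h and
swap_ssa_operands from tree-ssa.h, and the guards against PHI defs and
defs from other blocks are my reading of your lattice idea):

  /* Reorder the operands of commutative operations in BB so that the
     more expensive operand comes first, which tends to reduce register
     pressure when expanding to RTL.  */
  static void
  reorder_operands (basic_block bb)
  {
    unsigned int *lattice;  /* Cost of each stmt, indexed by gimple uid.  */
    unsigned int i = 0, n = 0;
    gimple_stmt_iterator gsi;
    ssa_op_iter iter;
    use_operand_p use_p;

    /* Number the statements; the uid doubles as the lattice index.  */
    for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
      gimple_set_uid (gsi_stmt (gsi), n++);
    lattice = XALLOCAVEC (unsigned int, n);

    for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
      {
        gimple stmt = gsi_stmt (gsi);
        tree op0, op1;
        gimple def0, def1;

        lattice[i] = estimate_num_insns (stmt, &eni_size_weights);
        /* Accumulate the cost of operands computed earlier in this BB.  */
        FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE)
          {
            gimple def = SSA_NAME_DEF_STMT (USE_FROM_PTR (use_p));
            if (gimple_bb (def) == bb && gimple_code (def) != GIMPLE_PHI)
              lattice[i] += lattice[gimple_uid (def)];
          }
        i++;

        if (!is_gimple_assign (stmt)
            || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
          continue;
        op0 = gimple_assign_rhs1 (stmt);
        op1 = gimple_assign_rhs2 (stmt);
        if (TREE_CODE (op0) != SSA_NAME || TREE_CODE (op1) != SSA_NAME)
          continue;
        def1 = SSA_NAME_DEF_STMT (op1);
        if (gimple_bb (def1) != bb || gimple_code (def1) == GIMPLE_PHI)
          continue;  /* Second operand is already cheap here.  */
        def0 = SSA_NAME_DEF_STMT (op0);
        /* Swap if the second operand is the more expensive one.  */
        if (gimple_bb (def0) != bb
            || gimple_code (def0) == GIMPLE_PHI
            || lattice[gimple_uid (def1)] > lattice[gimple_uid (def0)])
          swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
                             gimple_assign_rhs2_ptr (stmt));
      }
  }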

I will re-send the patch once testing completes.
Thanks.
Yuri.

2015-01-09 13:13 GMT+03:00 Richard Biener <richard.guenther@gmail.com>:
> On Mon, Jan 5, 2015 at 9:26 PM, Jeff Law <law@redhat.com> wrote:
>> On 12/29/14 06:30, Yuri Rumyantsev wrote:
>>>
>>> Hi All,
>>>
>>> Here is a patch which fixes several performance degradations introduced
>>> by operand canonicalization (r216728).  A very simple approach is used:
>>> if an operation is commutative and its second operand requires more
>>> operations (statements) to compute, swap the operands.
>>> Currently this is done under a special option which is set to true
>>> only for x86 32-bit targets (we have not seen any performance
>>> improvements on 64-bit).
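>>>
>>> As a (purely illustrative) example, in a fragment like
>>>
>>>   _1 = b_2(D) * c_3(D);
>>>   _4 = _1 + d_5(D);
>>>   x_6 = a_7(D) + _4;
>>>
>>> the second operand _4 of the final commutative '+' costs two
>>> statements while a_7(D) costs none, so we rewrite the statement as
>>> x_6 = _4 + a_7(D), letting expand work on the expensive subtree first.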
>>>
>>> Is it OK for trunk?
>>>
>>> 2014-12-26  Yuri Rumyantsev  <ysrumyan@gmail.com>
>>>
>>> * cfgexpand.c (count_num_stmt): New function.
>>> (reorder_operands): Likewise.
>>> (expand_gimple_basic_block): Insert call of reorder_operands.
>>> * common.opt (flag_reorder_operands): New flag.
>>> * config/i386/i386.c (ix86_option_override_internal): Set up
>>> flag_reorder_operands for 32-bit targets only.
>>> * doc/invoke.texi: Document the new optimization option
>>> -freorder-operands.
>>>
>>> gcc/testsuite/ChangeLog
>>> * gcc.target/i386/swap_opnd.c: New test.
>>
>> I'd do this unconditionally -- I don't think there's a compelling reason to
>> add another flag here.
>
> Indeed.
>
>> Could you use estimate_num_insns rather than rolling your own estimate code
>> here?  All you have to do is set up the weights structure and call the
>> estimation code.  I wouldn't be surprised if ultimately the existing insn
>> estimator is better than the one you're adding.
>
> Just use eni_size_weights.
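>
> i.e. (a minimal sketch, with 'stmt' being the statement to cost):
>
>   unsigned cost = estimate_num_insns (stmt, &eni_size_weights);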
>
> Your counting is quadratic; that's a no-go.  You'd have to keep a lattice
> of counts for SSA names to avoid this.  There is swap_ssa_operands ();
> in your swapping code you fail to update the SSA operands (maybe non-fatal
> because we are just expanding to RTL, but ...).
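>
> For a commutative assignment that would be something like (a sketch;
> unlike swapping the trees by hand, this keeps the operand caches and
> immediate-use lists consistent):
>
>   swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
>                      gimple_assign_rhs2_ptr (stmt));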
>
> bb->loop_father is always non-NULL, but doing this everywhere, not only
> in loops, looks fine to me.
>
> You can swap comparison operands on GIMPLE_CONDs for all
> codes by also swapping the EDGE_TRUE_VALUE/EDGE_FALSE_VALUE
> flags on the outgoing BB edges.
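>
> For reference, a sketch of one way to do the swap while leaving the
> edge flags alone, by also swapping the comparison code ('cond_stmt'
> being the GIMPLE_COND):
>
>   swap_ssa_operands (cond_stmt, gimple_cond_lhs_ptr (cond_stmt),
>                      gimple_cond_rhs_ptr (cond_stmt));
>   gimple_cond_set_code (cond_stmt,
>                         swap_tree_comparison (gimple_cond_code (cond_stmt)));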
>
> There are more cases that can be swapped in regular stmts as well,
> but I suppose we don't need to be "complete" here.
>
> So, in reorder_operands I'd do (pseudo-code)
>
>   n = 0;
>   for-all-stmts
>     gimple_set_uid (stmt, n++);
>   lattice = XALLOCAVEC (unsigned, n);
>
>   i = 0;
>   for-all-stmts
>     this_stmt_cost = estimate_num_insns (stmt, &eni_size_weights);
>     lattice[i] = this_stmt_cost;
>     FOR_EACH_SSA_USE_OPERAND (use)
>       if (use-is-defined-in-this-BB)
>         lattice[i] += lattice[gimple_uid (SSA_NAME_DEF_STMT (use))];
>     i++;
>     swap-if-operand-costs-say-so
>
> Richard.
>
>> Make sure to reference the PR in the ChangeLog.
>>
>> Please update and resubmit.
>>
>> Thanks,
>> Jeff

