This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH] Use RPO order for fwprop iteration
- From: Robin Dapp <rdapp@linux.vnet.ibm.com>
- To: Richard Biener <rguenther@suse.de>
- Cc: GCC Patches <gcc-patches@gcc.gnu.org>
- Date: Fri, 2 Sep 2016 11:28:03 +0200
- Subject: Re: [PATCH] Use RPO order for fwprop iteration
- References: <alpine.LSU.2.11.1608221023530.26629@t29.fhfr.qr>
This causes a performance regression in the xalancbmk SPECint2006
benchmark on s390x. At first sight the generated asm output doesn't look
too different, but I'll take a closer look. Is the fwprop iteration order
expected to have major performance implications?
Regards
Robin
> This changes it from PRE on the inverted graph to RPO order which works
> better for loops and blocks with no path to exit.
>
> Bootstrapped and tested on x86_64-unknown-linux-gnu, applied.
>
> Richard.
>
> 2016-08-22 Richard Biener <rguenther@suse.de>
>
> * tree-ssa-forwprop.c (pass_forwprop::execute): Use RPO order.
>
> Index: gcc/tree-ssa-forwprop.c
> ===================================================================
> --- gcc/tree-ssa-forwprop.c (revision 239607)
> +++ gcc/tree-ssa-forwprop.c (working copy)
> @@ -2099,7 +2099,8 @@ pass_forwprop::execute (function *fun)
> lattice.create (num_ssa_names);
> lattice.quick_grow_cleared (num_ssa_names);
> int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (fun));
> - int postorder_num = inverted_post_order_compute (postorder);
> + int postorder_num = pre_and_rev_post_order_compute_fn (cfun, NULL,
> + postorder, false);
> auto_vec<gimple *, 4> to_fixup;
> to_purge = BITMAP_ALLOC (NULL);
> for (int i = 0; i < postorder_num; ++i)
>