Re: [PATCH] sched-rgn: run add_branch_dependencies for sel-sched (PR 84301)
- From: Andrey Belevantsev <abel at ispras dot ru>
- To: Alexander Monakov <amonakov at ispras dot ru>, gcc-patches at gcc dot gnu dot org
- Cc: "Vladimir N. Makarov" <vmakarov at redhat dot com>
- Date: Wed, 11 Apr 2018 13:15:54 +0300
- Subject: Re: [PATCH] sched-rgn: run add_branch_dependencies for sel-sched (PR 84301)
- Openpgp: preference=signencrypt
- References: <alpine.LNX.2.20.13.1804101356370.24851@monopod.intra.ispras.ru>
On 10.04.2018 14:09, Alexander Monakov wrote:
> Hi,
>
> The add_branch_dependences function is fairly unusual in that it creates
> dependence edges "out of thin air" for all sorts of instructions preceding
> BB end. I think that is really unfortunate (explicit barriers in RTL would
> be more natural), but I've already complained about that in the PR.
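>
> For reference, the part in question is the tail of add_branch_dependences, which
> walks back from the last insn of the block and pins whatever it finds there.
> Very roughly (a simplified sketch, not the actual sched-rgn.c code) it does:
>
>   insn = last;
>   while (insn != head)
>     {
>       insn = prev_nonnote_insn (insn);
>       /* Give the block's last insn an artificial anti-dependence on INSN
>          so INSN cannot be scheduled away from the block end.  */
>       add_dependence (last, insn, REG_DEP_ANTI);
>     }
>
> i.e. none of these dependences correspond to real register or memory accesses.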
>
> The bug/regression is that this function was not run for sel-sched, but the
> testcase uncovers that moving a USE away from the return insn can break
> assumptions in mode-switching code.
>
> Solve this by running the first part of add_branch_dependences, which sets
> CANT_MOVE flags on immovable non-branch insns, for sel-sched as well.
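>
> Roughly, that first part has the following shape (a simplified sketch of the
> sched-rgn.c loop, not the exact code):
>
>   insn = tail;
>   while (CALL_P (insn) || JUMP_P (insn)
>          || (NONJUMP_INSN_P (insn)
>              && (GET_CODE (PATTERN (insn)) == USE
>                  || GET_CODE (PATTERN (insn)) == CLOBBER)))
>     {
>       /* Pin the insn in place; sel-sched honors CANT_MOVE.  */
>       CANT_MOVE (insn) = 1;
>       insn = PREV_INSN (insn);
>     }
>
> With that part running for sel-sched too, the USE of the return value keeps its
> CANT_MOVE flag and stays next to the return insn.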
>
> Bootstrapped/regtested on x86_64 with sel-sched active. OK to apply?
Looks fine to me but I cannot approve -- maybe Vladimir can take a look?
Andrey
>
> Alexander
>
>
> PR target/84301
> * sched-rgn.c (add_branch_dependences): Move sel_sched_p check here...
> (compute_block_dependences): ... from here.
>
> * gcc.target/i386/pr84301.c: New test.
>
> diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
> index 8c3a740b70e..3c67fccb9b1 100644
> --- a/gcc/sched-rgn.c
> +++ b/gcc/sched-rgn.c
> @@ -2497,6 +2497,11 @@ add_branch_dependences (rtx_insn *head, rtx_insn *tail)
> while (insn != head && DEBUG_INSN_P (insn));
> }
>
> + /* Selective scheduling handles control dependencies by itself, and
> + CANT_MOVE flags ensure that other insns will be kept in place. */
> + if (sel_sched_p ())
> + return;
> +
> /* Make sure these insns are scheduled last in their block. */
> insn = last;
> if (insn != 0)
> @@ -2725,9 +2730,7 @@ compute_block_dependences (int bb)
>
> sched_analyze (&tmp_deps, head, tail);
>
> - /* Selective scheduling handles control dependencies by itself. */
> - if (!sel_sched_p ())
> - add_branch_dependences (head, tail);
> + add_branch_dependences (head, tail);
>
> if (current_nr_blocks > 1)
> propagate_deps (bb, &tmp_deps);
> diff --git a/gcc/testsuite/gcc.target/i386/pr84301.c b/gcc/testsuite/gcc.target/i386/pr84301.c
> index e69de29bb2d..f1708b8ea6c 100644
> --- a/gcc/testsuite/gcc.target/i386/pr84301.c
> +++ b/gcc/testsuite/gcc.target/i386/pr84301.c
> @@ -0,0 +1,15 @@
> +/* PR target/84301 */
> +/* { dg-do compile } */
> +/* { dg-options "-march=bdver1 -O1 -fexpensive-optimizations -fschedule-insns -fselective-scheduling -fno-dce -fno-tree-dce --param max-pending-list-length=0 --param selsched-max-lookahead=2" } */
> +
> +int lr;
> +long int xl;
> +
> +int
> +v4 (void)
> +{
> + int mp;
> +
> + ++xl;
> + mp = (lr - xl) > 1;
> +}
>