This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH 2/2] S/390: Do not end groups after fallthru edge
> Can't we just set s390_sched_state to s390_last_sched_state in
> s390_sched_init?
Good idea, this simplifies the code quite a bit.
> Preserving the sched state across basic blocks for your case works
> only if the BBs are traversed with the fall through edges coming
> first. Is that the case? We probably should have a description for
> s390_last_sched_state stating this.
According to the documentation, the basic blocks should be traversed in
"instruction stream" order, i.e. a fallthru edge will be considered
first, as far as I understand it.
Regards
Robin
--
gcc/ChangeLog:
2017-10-17 Robin Dapp <rdapp@linux.vnet.ibm.com>
* config/s390/s390.c (s390_bb_fallthru_entry_likely): New
function.
(s390_sched_init): Do not reset s390_sched_state if we entered
the current basic block via a fallthru edge and all others are
very unlikely.
diff --git a/gcc/config/s390/s390.c b/gcc/config/s390/s390.c
index c1a144e..dac81bc 100644
--- a/gcc/config/s390/s390.c
+++ b/gcc/config/s390/s390.c
@@ -83,6 +83,7 @@ along with GCC; see the file COPYING3. If not see
#include "symbol-summary.h"
#include "ipa-prop.h"
#include "ipa-fnsummary.h"
+#include "sched-int.h"
/* This file should be included last. */
#include "target-def.h"
@@ -14346,6 +14347,28 @@ s390_z10_prevent_earlyload_conflicts (rtx_insn **ready, int *nready_p)
ready[0] = tmp;
}
+/* Returns TRUE if BB is entered via a fallthru edge and all other
+ incoming edges are less than unlikely. */
+static bool
+s390_bb_fallthru_entry_likely (basic_block bb)
+{
+ edge e, fallthru_edge;
+ edge_iterator ei;
+
+ if (!bb)
+ return false;
+
+ fallthru_edge = find_fallthru_edge (bb->preds);
+ if (!fallthru_edge)
+ return false;
+
+ FOR_EACH_EDGE (e, ei, bb->preds)
+ if (e != fallthru_edge
+ && e->probability >= profile_probability::unlikely ())
+ return false;
+
+ return true;
+}
/* The s390_sched_state variable tracks the state of the current or
the last instruction group.
@@ -14354,7 +14377,7 @@ s390_z10_prevent_earlyload_conflicts (rtx_insn **ready, int *nready_p)
3 the last group is complete - normal insns
4 the last group was a cracked/expanded insn */
-static int s390_sched_state;
+static int s390_sched_state = 0;
#define S390_SCHED_STATE_NORMAL 3
#define S390_SCHED_STATE_CRACKED 4
@@ -14764,7 +14787,17 @@ s390_sched_init (FILE *file ATTRIBUTE_UNUSED,
{
last_scheduled_insn = NULL;
memset (last_scheduled_unit_distance, 0, MAX_SCHED_UNITS * sizeof (int));
- s390_sched_state = 0;
+
+ /* If the next basic block is most likely entered via a fallthru edge
+ we keep the last sched state. Otherwise we start a new group.
+ The scheduler traverses basic blocks in "instruction stream" ordering
+ so if we see a fallthru edge here, s390_sched_state will be of its
+ source block. */
+ rtx_insn *insn = current_sched_info->prev_head
+ ? NEXT_INSN (current_sched_info->prev_head) : NULL;
+ basic_block bb = insn ? BLOCK_FOR_INSN (insn) : NULL;
+ if (!s390_bb_fallthru_entry_likely (bb))
+ s390_sched_state = 0;
}
/* This target hook implementation for TARGET_LOOP_UNROLL_ADJUST calculates