This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: Use predicates for RTL objects


On Fri, Aug 02, 2019 at 01:53:11PM -0500, Segher Boessenkool wrote:
> On Fri, Aug 02, 2019 at 01:55:04PM -0400, Arvind Sankar wrote:
> > On Fri, Aug 02, 2019 at 12:49:56PM -0500, Segher Boessenkool wrote:
> > > On Fri, Aug 02, 2019 at 01:38:21PM -0400, Arvind Sankar wrote:
> > > > Hi, I have taken a crack at the beginner GCC project mentioned at
> > > > https://gcc.gnu.org/projects/beginner.html to replace uses of GET_CODE
> > > > to check rtx_code with the appropriate predicate macros.
> > > > 
> > > > Would someone be able to review/actually apply the changes if they look
> > > > acceptable?
> > > > 
> > > > Most of the change is auto-generated using the enclosed script [1]. In
> > > > addition I have added 3 new predicates to rtl.h: CONST_VECTOR_P,
> > > > CONST_STRING_P and CONST_P. After the autogenned patch there is a small
> > > > cleanup for a couple instances where the existing comparison is split
> > > > across source lines and wasn't picked up by the script.
> > > 
> > > Thank you for doing this!
> > > 
> > > I don't think there should be a CONST_P like this; the name suggests
> > > too much that the macro returns true for constants, which isn't what it
> > > does (not in either direction; neither more nor less than it).
> > > 
> > > It's worse than SET_P or PLUS_P would be even, imo.
> > 
> > Yes, I agree CONST_P is a little confusing. There are about 60
> > occurrences of GET_CODE (..) being compared to CONST in generic code.
> > Should I just leave it out of the conversion
> 
> That's what I would do.
> 
> > or perhaps rename it to
> > RTL_CONST_P? Though none of the other macros is namespaced, unfortunately.
> 
> But please get other people's opinion.  Like, people who can actually
> approve your patch in the first place ;-)
> 
> 
> Segher

Thanks. Anyway, attaching a version that leaves out CONST_P.
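
For anyone skimming the patches below, the rewrite is purely mechanical:
each open-coded comparison of GET_CODE (x) against an rtx_code becomes the
corresponding predicate macro from rtl.h (SYMBOL_REF_P, SUBREG_P, and so
on).  The stand-alone sketch that follows is editorial illustration only;
the enum, struct and macros are simplified mock-ups rather than GCC's real
rtl.h definitions, but it compiles and shows the before/after shape of the
conversion.

  /* Minimal mock of the rtl.h idiom -- illustrative only, not GCC's
     actual definitions.  */
  #include <stdio.h>

  enum rtx_code { REG, MEM, SUBREG, SYMBOL_REF, CONST_STRING };

  struct rtx_def { enum rtx_code code; };
  typedef struct rtx_def *rtx;

  #define GET_CODE(X)       ((X)->code)
  /* Predicate macros simply wrap the GET_CODE comparison, as rtl.h does.  */
  #define SYMBOL_REF_P(X)   (GET_CODE (X) == SYMBOL_REF)
  #define CONST_STRING_P(X) (GET_CODE (X) == CONST_STRING)

  int
  main (void)
  {
    struct rtx_def sym = { SYMBOL_REF };
    rtx x = &sym;

    /* Before the conversion: open-coded rtx_code comparison.  */
    if (GET_CODE (x) == SYMBOL_REF)
      puts ("symbol_ref (old style)");

    /* After the conversion: the equivalent predicate macro.  */
    if (SYMBOL_REF_P (x))
      puts ("symbol_ref (new style)");

    return 0;
  }

The CONST_VECTOR_P and CONST_STRING_P macros added in patch 1/3 follow
exactly this pattern.
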
From 1bbb0fbaa9a17aaefc1eca86054bbd95719ec691 Mon Sep 17 00:00:00 2001
From: Arvind Sankar <nivedita@alum.mit.edu>
Date: Fri, 2 Aug 2019 14:56:57 -0400
Subject: [PATCH 1/3] Add CONST_VECTOR_P and CONST_STRING_P

---
 gcc/rtl.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/gcc/rtl.h b/gcc/rtl.h
index 039ab05f951..592f141bf22 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -823,10 +823,16 @@ struct GTY(()) rtvec_def {
   (CONST_INT_P (X) || CONST_DOUBLE_AS_INT_P (X))
 #endif
 
-/* Predicate yielding true iff X is an rtx for a double-int.  */
+/* Predicate yielding true iff X is an rtx for a floating point constant.  */
 #define CONST_DOUBLE_AS_FLOAT_P(X) \
   (GET_CODE (X) == CONST_DOUBLE && GET_MODE (X) != VOIDmode)
 
+/* Predicate yielding nonzero iff X is an rtx for a constant vector.  */
+#define CONST_VECTOR_P(X) (GET_CODE (X) == CONST_VECTOR)
+
+/* Predicate yielding nonzero iff X is an rtx for a constant string */
+#define CONST_STRING_P(X) (GET_CODE (X) == CONST_STRING)
+
 /* Predicate yielding nonzero iff X is a label insn.  */
 #define LABEL_P(X) (GET_CODE (X) == CODE_LABEL)
 
-- 
2.21.0

From 5d76b6d68d50974cf51bb823ffab64c6f2477fa9 Mon Sep 17 00:00:00 2001
From: Arvind Sankar <nivedita@alum.mit.edu>
Date: Fri, 2 Aug 2019 14:59:03 -0400
Subject: [PATCH 2/3] Use rtx_code predicates instead of GET_CODE

---
 gcc/alias.c             |  12 +--
 gcc/asan.c              |   2 +-
 gcc/bb-reorder.c        |   2 +-
 gcc/bt-load.c           |   2 +-
 gcc/builtins.c          |   2 +-
 gcc/caller-save.c       |   4 +-
 gcc/calls.c             |  10 +--
 gcc/cfgbuild.c          |   4 +-
 gcc/cfgcleanup.c        |   2 +-
 gcc/cfgexpand.c         |   8 +-
 gcc/cfgrtl.c            |   4 +-
 gcc/combine.c           | 162 ++++++++++++++++++++--------------------
 gcc/cprop.c             |   2 +-
 gcc/cse.c               |  80 ++++++++++----------
 gcc/cselib.c            |   6 +-
 gcc/dbxout.c            |  20 ++---
 gcc/defaults.h          |   2 +-
 gcc/df-core.c           |   4 +-
 gcc/df-problems.c       |   6 +-
 gcc/df-scan.c           |  18 ++---
 gcc/df.h                |   4 +-
 gcc/dojump.c            |   2 +-
 gcc/dse.c               |   6 +-
 gcc/dwarf2asm.c         |   6 +-
 gcc/dwarf2out.c         |  52 ++++++-------
 gcc/emit-rtl.c          |  16 ++--
 gcc/explow.c            |  16 ++--
 gcc/expmed.c            |  24 +++---
 gcc/expr.c              |  46 ++++++------
 gcc/final.c             |  22 +++---
 gcc/function.c          |   2 +-
 gcc/fwprop.c            |  16 ++--
 gcc/gcse-common.c       |   2 +-
 gcc/gcse.c              |   8 +-
 gcc/genattrtab.c        |  16 ++--
 gcc/genpreds.c          |  10 +--
 gcc/genrecog.c          |   2 +-
 gcc/gensupport.c        |   8 +-
 gcc/ifcvt.c             |  20 ++---
 gcc/internal-fn.c       |   6 +-
 gcc/ira-build.c         |   2 +-
 gcc/ira-conflicts.c     |   4 +-
 gcc/ira-costs.c         |  10 +--
 gcc/ira-emit.c          |   2 +-
 gcc/ira-lives.c         |  18 ++---
 gcc/ira.c               |  12 +--
 gcc/jump.c              |  36 ++++-----
 gcc/loop-doloop.c       |   2 +-
 gcc/loop-invariant.c    |   6 +-
 gcc/loop-iv.c           |  20 ++---
 gcc/loop-unroll.c       |   2 +-
 gcc/lower-subreg.c      |  16 ++--
 gcc/lra-constraints.c   |  56 +++++++-------
 gcc/lra-eliminations.c  |   8 +-
 gcc/lra.c               |   6 +-
 gcc/mode-switching.c    |  14 ++--
 gcc/modulo-sched.c      |   2 +-
 gcc/optabs.c            |   6 +-
 gcc/postreload-gcse.c   |   4 +-
 gcc/postreload.c        |  16 ++--
 gcc/print-rtl.c         |   4 +-
 gcc/read-rtl-function.c |   6 +-
 gcc/read-rtl.c          |   2 +-
 gcc/recog.c             |  12 +--
 gcc/ree.c               |  12 +--
 gcc/reg-stack.c         |  14 ++--
 gcc/regcprop.c          |   6 +-
 gcc/reginfo.c           |  18 ++---
 gcc/regrename.c         |  10 +--
 gcc/reload.c            |  68 ++++++++---------
 gcc/reload1.c           |  66 ++++++++--------
 gcc/reorg.c             |  16 ++--
 gcc/resource.c          |   6 +-
 gcc/rtl.c               |   4 +-
 gcc/rtlanal.c           |  46 ++++++------
 gcc/rtlhooks.c          |   2 +-
 gcc/sched-deps.c        |  12 +--
 gcc/sched-rgn.c         |   4 +-
 gcc/sel-sched.c         |   2 +-
 gcc/simplify-rtx.c      |  86 ++++++++++-----------
 gcc/symtab.c            |   4 +-
 gcc/tree-ssa-address.c  |   8 +-
 gcc/valtrack.c          |   6 +-
 gcc/var-tracking.c      |  64 ++++++++--------
 gcc/varasm.c            |  26 +++----
 gcc/xcoffout.h          |   2 +-
 86 files changed, 692 insertions(+), 692 deletions(-)

diff --git a/gcc/alias.c b/gcc/alias.c
index 2755df72907..86ada16b6ae 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2222,7 +2222,7 @@ base_alias_check (rtx x, rtx x_base, rtx y, rtx y_base,
     return 1;
 
   /* Differing symbols not accessed via AND never alias.  */
-  if (GET_CODE (x_base) == SYMBOL_REF && GET_CODE (y_base) == SYMBOL_REF)
+  if (SYMBOL_REF_P (x_base) && SYMBOL_REF_P (y_base))
     return compare_base_symbol_refs (x_base, y_base) != 0;
 
   if (GET_CODE (x_base) != ADDRESS && GET_CODE (y_base) != ADDRESS)
@@ -2444,7 +2444,7 @@ memrefs_conflict_p (poly_int64 xsize, rtx x, poly_int64 ysize, rtx y,
   else
     y = addr_side_effect_eval (y, maybe_lt (ysize, 0) ? -ysize : ysize, 0);
 
-  if (GET_CODE (x) == SYMBOL_REF && GET_CODE (y) == SYMBOL_REF)
+  if (SYMBOL_REF_P (x) && SYMBOL_REF_P (y))
     {
       int cmp = compare_base_symbol_refs (x,y);
 
@@ -2954,8 +2954,8 @@ true_dependence_1 (const_rtx mem, machine_mode mem_mode, rtx mem_addr,
     return 1;
 
   base = find_base_term (x_addr);
-  if (base && (GET_CODE (base) == LABEL_REF
-	       || (GET_CODE (base) == SYMBOL_REF
+  if (base && (LABEL_REF_P (base)
+	       || (SYMBOL_REF_P (base)
 		   && CONSTANT_POOL_ADDRESS_P (base))))
     return 0;
 
@@ -3063,8 +3063,8 @@ write_dependence_p (const_rtx mem,
   base = find_base_term (true_mem_addr);
   if (! writep
       && base
-      && (GET_CODE (base) == LABEL_REF
-	  || (GET_CODE (base) == SYMBOL_REF
+      && (LABEL_REF_P (base)
+	  || (SYMBOL_REF_P (base)
 	      && CONSTANT_POOL_ADDRESS_P (base))))
     return 0;
 
diff --git a/gcc/asan.c b/gcc/asan.c
index a731bd490b4..9976f709a88 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1805,7 +1805,7 @@ asan_protect_global (tree decl, bool ignore_decl_rtl_set_p)
     {
 
       rtl = DECL_RTL (decl);
-      if (!MEM_P (rtl) || GET_CODE (XEXP (rtl, 0)) != SYMBOL_REF)
+      if (!MEM_P (rtl) || !SYMBOL_REF_P (XEXP (rtl, 0)))
 	return false;
       symbol = XEXP (rtl, 0);
 
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 0ac39140c6c..43623e00d76 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -2167,7 +2167,7 @@ fix_crossing_conditional_branches (void)
 		  new_label = gen_label_rtx ();
 		  emit_label (new_label);
 
-		  gcc_assert (GET_CODE (old_label) == LABEL_REF);
+		  gcc_assert (LABEL_REF_P (old_label));
 		  old_jump_target = old_jump_insn->jump_target ();
 		  new_jump = as_a <rtx_jump_insn *>
 		    (emit_jump_insn (targetm.gen_jump (old_jump_target)));
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index f68879ca49a..11eee41e9d5 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -226,7 +226,7 @@ insn_sets_btr_p (const rtx_insn *insn, int check_const, int *regno)
       rtx dest = SET_DEST (set);
       rtx src = SET_SRC (set);
 
-      if (GET_CODE (dest) == SUBREG)
+      if (SUBREG_P (dest))
 	dest = XEXP (dest, 0);
 
       if (REG_P (dest)
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 695a9d191af..c01c9d4c789 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -1796,7 +1796,7 @@ expand_builtin_apply (rtx function, rtx arguments, rtx argsize)
   /* Ensure address is valid.  SYMBOL_REF is already valid, so no need,
      and we don't want to load it into a register as an optimization,
      because prepare_call_address already did it if it should be done.  */
-  if (GET_CODE (function) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (function))
     function = memory_address (FUNCTION_MODE, function);
 
   /* Generate the actual call instruction and save the return value.  */
diff --git a/gcc/caller-save.c b/gcc/caller-save.c
index 7c1de894976..e816d080b82 100644
--- a/gcc/caller-save.c
+++ b/gcc/caller-save.c
@@ -955,7 +955,7 @@ mark_set_regs (rtx reg, const_rtx setter ATTRIBUTE_UNUSED, void *data)
   int regno, endregno, i;
   HARD_REG_SET *this_insn_sets = (HARD_REG_SET *) data;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     {
       rtx inner = SUBREG_REG (reg);
       if (!REG_P (inner) || REGNO (inner) >= FIRST_PSEUDO_REGISTER)
@@ -990,7 +990,7 @@ add_stored_regs (rtx reg, const_rtx setter, void *data)
   if (GET_CODE (setter) == CLOBBER)
     return;
 
-  if (GET_CODE (reg) == SUBREG
+  if (SUBREG_P (reg)
       && REG_P (SUBREG_REG (reg))
       && REGNO (SUBREG_REG (reg)) < FIRST_PSEUDO_REGISTER)
     {
diff --git a/gcc/calls.c b/gcc/calls.c
index 7507b698e27..7be7d3b7211 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -217,7 +217,7 @@ prepare_call_address (tree fndecl_or_type, rtx funexp, rtx static_chain_value,
 {
   /* Make a valid memory address and copy constants through pseudo-regs,
      but not for a constant address if -fno-function-cse.  */
-  if (GET_CODE (funexp) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (funexp))
     {
       /* If it's an indirect call by descriptor, generate code to perform
 	 runtime identification of the pointer and load the descriptor.  */
@@ -393,7 +393,7 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree fndecl ATTRIBUTE_UNU
   /* Ensure address is valid.  SYMBOL_REF is already valid, so no need,
      and we don't want to load it into a register as an optimization,
      because prepare_call_address already did it if it should be done.  */
-  if (GET_CODE (funexp) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (funexp))
     funexp = memory_address (FUNCTION_MODE, funexp);
 
   funmem = gen_rtx_MEM (FUNCTION_MODE, funexp);
@@ -1004,7 +1004,7 @@ precompute_register_parameters (int num_actuals, struct arg_data *args,
 	   loading the parameters registers.  */
 
 	else if ((! (REG_P (args[i].value)
-		     || (GET_CODE (args[i].value) == SUBREG
+		     || (SUBREG_P (args[i].value)
 			 && REG_P (SUBREG_REG (args[i].value)))))
 		 && args[i].mode != BLKmode
 		 && (set_src_cost (args[i].value, args[i].mode,
@@ -4286,7 +4286,7 @@ expand_call (tree exp, rtx target, int ignore)
 	    {
 	      datum = XEXP (DECL_RTL (fndecl), 0);
 	      gcc_assert (datum != NULL_RTX
-			  && GET_CODE (datum) == SYMBOL_REF);
+			  && SYMBOL_REF_P (datum));
 	    }
 	  last = last_call_insn ();
 	  add_reg_note (last, REG_CALL_DECL, datum);
@@ -5333,7 +5333,7 @@ emit_library_call_value_1 (int retval, rtx orgfun, rtx value,
   if (flag_ipa_ra)
     {
       rtx datum = orgfun;
-      gcc_assert (GET_CODE (datum) == SYMBOL_REF);
+      gcc_assert (SYMBOL_REF_P (datum));
       rtx_call_insn *last = last_call_insn ();
       add_reg_note (last, REG_CALL_DECL, datum);
     }
diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index 934325c6538..8ac13b753dd 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -273,7 +273,7 @@ make_edges (basic_block min, basic_block max, int update_p)
 	      if ((tmp = single_set (insn)) != NULL
 		  && SET_DEST (tmp) == pc_rtx
 		  && GET_CODE (SET_SRC (tmp)) == IF_THEN_ELSE
-		  && GET_CODE (XEXP (SET_SRC (tmp), 2)) == LABEL_REF)
+		  && LABEL_REF_P (XEXP (SET_SRC (tmp), 2)))
 		make_label_edge (edge_cache, bb,
 				 label_ref_label (XEXP (SET_SRC (tmp), 2)), 0);
 	    }
@@ -414,7 +414,7 @@ purge_dead_tablejump_edges (basic_block bb, rtx_jump_table_data *table)
   if ((tmp = single_set (insn)) != NULL
        && SET_DEST (tmp) == pc_rtx
        && GET_CODE (SET_SRC (tmp)) == IF_THEN_ELSE
-       && GET_CODE (XEXP (SET_SRC (tmp), 2)) == LABEL_REF)
+       && LABEL_REF_P (XEXP (SET_SRC (tmp), 2)))
     mark_tablejump_edge (label_ref_label (XEXP (SET_SRC (tmp), 2)));
 
   for (ei = ei_start (bb->succs); (e = ei_safe_edge (ei)); )
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index b9307631e1c..e6d3b00364f 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -1209,7 +1209,7 @@ old_insns_match_p (int mode ATTRIBUTE_UNUSED, rtx_insn *i1, rtx_insn *i2)
       if (flag_sanitize & SANITIZE_ADDRESS)
 	{
 	  rtx call = get_call_rtx_from (i1);
-	  if (call && GET_CODE (XEXP (XEXP (call, 0), 0)) == SYMBOL_REF)
+	  if (call && SYMBOL_REF_P (XEXP (XEXP (call, 0), 0)))
 	    {
 	      rtx symbol = XEXP (XEXP (call, 0), 0);
 	      if (SYMBOL_REF_DECL (symbol)
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 33af991573f..8f573140d62 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -3786,7 +3786,7 @@ expand_gimple_stmt_1 (gimple *stmt)
 	    bool promoted = false;
 
 	    target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
-	    if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
+	    if (SUBREG_P (target) && SUBREG_PROMOTED_VAR_P (target))
 	      promoted = true;
 
 	    ops.code = gimple_assign_rhs_code (assign_stmt);
@@ -4411,7 +4411,7 @@ expand_debug_expr (tree exp)
 
 	  op0 = make_decl_rtl_for_debug (exp);
 	  if (!MEM_P (op0)
-	      || GET_CODE (XEXP (op0, 0)) != SYMBOL_REF
+	      || !SYMBOL_REF_P (XEXP (op0, 0))
 	      || SYMBOL_REF_DECL (XEXP (op0, 0)) != exp)
 	    return NULL;
 	}
@@ -5481,8 +5481,8 @@ expand_debug_locations (void)
 	    gcc_assert (mode == GET_MODE (val)
 			|| (GET_MODE (val) == VOIDmode
 			    && (CONST_SCALAR_INT_P (val)
-				|| GET_CODE (val) == CONST_FIXED
-				|| GET_CODE (val) == LABEL_REF)));
+				|| CONST_FIXED_P (val)
+				|| LABEL_REF_P (val))));
 	  }
 
 	INSN_VAR_LOCATION_LOC (insn) = val;
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 1f222aea5d1..6e5ccd46c22 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -1217,7 +1217,7 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
       if ((tmp = single_set (insn)) != NULL
 	  && SET_DEST (tmp) == pc_rtx
 	  && GET_CODE (SET_SRC (tmp)) == IF_THEN_ELSE
-	  && GET_CODE (XEXP (SET_SRC (tmp), 2)) == LABEL_REF
+	  && LABEL_REF_P (XEXP (SET_SRC (tmp), 2))
 	  && label_ref_label (XEXP (SET_SRC (tmp), 2)) == old_label)
 	{
 	  XEXP (SET_SRC (tmp), 2) = gen_rtx_LABEL_REF (Pmode,
@@ -1238,7 +1238,7 @@ patch_jump_insn (rtx_insn *insn, rtx_insn *old_label, basic_block new_bb)
       for (i = 0; i < n; ++i)
 	{
 	  rtx old_ref = ASM_OPERANDS_LABEL (tmp, i);
-	  gcc_assert (GET_CODE (old_ref) == LABEL_REF);
+	  gcc_assert (LABEL_REF_P (old_ref));
 	  if (XEXP (old_ref, 0) == old_label)
 	    {
 	      ASM_OPERANDS_LABEL (tmp, i)
diff --git a/gcc/combine.c b/gcc/combine.c
index f7b1ebc8cc0..1f43dd02dab 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -582,7 +582,7 @@ find_single_use_1 (rtx dest, rtx *loc)
       if (GET_CODE (SET_DEST (x)) != CC0
 	  && GET_CODE (SET_DEST (x)) != PC
 	  && !REG_P (SET_DEST (x))
-	  && ! (GET_CODE (SET_DEST (x)) == SUBREG
+	  && ! (SUBREG_P (SET_DEST (x))
 		&& REG_P (SUBREG_REG (SET_DEST (x)))
 		&& !read_modify_subreg_p (SET_DEST (x))))
 	break;
@@ -738,7 +738,7 @@ do_SUBST (rtx *into, rtx newval)
 	 when do_SUBST is called to replace the operand thereof, so we
 	 perform this test on oldval instead, checking whether an
 	 invalid replacement took place before we got here.  */
-      gcc_assert (!(GET_CODE (oldval) == SUBREG
+      gcc_assert (!(SUBREG_P (oldval)
 		    && CONST_INT_P (SUBREG_REG (oldval))));
       gcc_assert (!(GET_CODE (oldval) == ZERO_EXTEND
 		    && CONST_INT_P (XEXP (oldval, 0))));
@@ -2222,7 +2222,7 @@ combinable_i3pat (rtx_insn *i3, rtx *loc, rtx i2dest, rtx i1dest, rtx i0dest,
       rtx subdest;
 
       while (GET_CODE (inner_dest) == STRICT_LOW_PART
-	     || GET_CODE (inner_dest) == SUBREG
+	     || SUBREG_P (inner_dest)
 	     || GET_CODE (inner_dest) == ZERO_EXTRACT)
 	inner_dest = XEXP (inner_dest, 0);
 
@@ -2265,7 +2265,7 @@ combinable_i3pat (rtx_insn *i3, rtx *loc, rtx i2dest, rtx i1dest, rtx i0dest,
 	 STACK_POINTER_REGNUM, since these are always considered to be
 	 live.  Similarly for ARG_POINTER_REGNUM if it is fixed.  */
       subdest = dest;
-      if (GET_CODE (subdest) == SUBREG && !partial_subreg_p (subdest))
+      if (SUBREG_P (subdest) && !partial_subreg_p (subdest))
 	subdest = SUBREG_REG (subdest);
       if (pi3dest_killed
 	  && REG_P (subdest)
@@ -2352,9 +2352,9 @@ cant_combine_insn_p (rtx_insn *insn)
     return 0;
   src = SET_SRC (set);
   dest = SET_DEST (set);
-  if (GET_CODE (src) == SUBREG)
+  if (SUBREG_P (src))
     src = SUBREG_REG (src);
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dest = SUBREG_REG (dest);
   if (REG_P (src) && REG_P (dest)
       && ((HARD_REGISTER_P (src)
@@ -2476,7 +2476,7 @@ adjust_for_new_dest (rtx_insn *insn)
 
   while (GET_CODE (reg) == ZERO_EXTRACT
 	 || GET_CODE (reg) == STRICT_LOW_PART
-	 || GET_CODE (reg) == SUBREG)
+	 || SUBREG_P (reg))
     reg = XEXP (reg, 0);
   gcc_assert (REG_P (reg));
 
@@ -2530,7 +2530,7 @@ reg_subword_p (rtx x, rtx reg)
       || GET_CODE (x) == ZERO_EXTRACT)
     x = XEXP (x, 0);
 
-  return GET_CODE (x) == SUBREG
+  return SUBREG_P (x)
 	 && SUBREG_REG (x) == reg
 	 && GET_MODE_CLASS (GET_MODE (x)) == MODE_INT;
 }
@@ -2779,13 +2779,13 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
       if ((set0 = single_set (i0))
 	  /* Ensure the source of SET0 is a MEM, possibly buried inside
 	     an extension.  */
-	  && (GET_CODE (SET_SRC (set0)) == MEM
+	  && (MEM_P (SET_SRC (set0))
 	      || ((GET_CODE (SET_SRC (set0)) == ZERO_EXTEND
 		   || GET_CODE (SET_SRC (set0)) == SIGN_EXTEND)
-		  && GET_CODE (XEXP (SET_SRC (set0), 0)) == MEM))
+		  && MEM_P (XEXP (SET_SRC (set0), 0))))
 	  && (set3 = single_set (i3))
 	  /* Ensure the destination of SET3 is a MEM.  */
-	  && GET_CODE (SET_DEST (set3)) == MEM
+	  && MEM_P (SET_DEST (set3))
 	  /* Would it be better to extract the base address for the MEM
 	     in SET3 and look for that?  I don't have cases where it matters
 	     but I could envision such cases.  */
@@ -3625,7 +3625,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
 
       if (((REG_P (SET_DEST (set1))
 	    && find_reg_note (i3, REG_UNUSED, SET_DEST (set1)))
-	   || (GET_CODE (SET_DEST (set1)) == SUBREG
+	   || (SUBREG_P (SET_DEST (set1))
 	       && find_reg_note (i3, REG_UNUSED, SUBREG_REG (SET_DEST (set1)))))
 	  && insn_nothrow_p (i3)
 	  && !side_effects_p (SET_SRC (set1)))
@@ -3636,14 +3636,14 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
 
       else if (((REG_P (SET_DEST (set0))
 		 && find_reg_note (i3, REG_UNUSED, SET_DEST (set0)))
-		|| (GET_CODE (SET_DEST (set0)) == SUBREG
+		|| (SUBREG_P (SET_DEST (set0))
 		    && find_reg_note (i3, REG_UNUSED,
 				      SUBREG_REG (SET_DEST (set0)))))
 	       && insn_nothrow_p (i3)
 	       && !side_effects_p (SET_SRC (set0)))
 	{
 	  rtx dest = SET_DEST (set1);
-	  if (GET_CODE (dest) == SUBREG)
+	  if (SUBREG_P (dest))
 	    dest = SUBREG_REG (dest);
 	  if (!reg_used_between_p (dest, i2, i3))
 	    {
@@ -3800,12 +3800,12 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
 
 	      while (GET_CODE (new_i3_dest) == ZERO_EXTRACT
 		     || GET_CODE (new_i3_dest) == STRICT_LOW_PART
-		     || GET_CODE (new_i3_dest) == SUBREG)
+		     || SUBREG_P (new_i3_dest))
 		new_i3_dest = XEXP (new_i3_dest, 0);
 
 	      while (GET_CODE (new_i2_dest) == ZERO_EXTRACT
 		     || GET_CODE (new_i2_dest) == STRICT_LOW_PART
-		     || GET_CODE (new_i2_dest) == SUBREG)
+		     || SUBREG_P (new_i2_dest))
 		new_i2_dest = XEXP (new_i2_dest, 0);
 
 	      if (REG_P (new_i3_dest)
@@ -4037,7 +4037,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
 			       HOST_BITS_PER_INT)
 		  && (reg_stat[REGNO (temp_expr)].nonzero_bits
 		      != GET_MODE_MASK (word_mode))))
-	   && ! (GET_CODE (SET_DEST (XVECEXP (newpat, 0, 1))) == SUBREG
+	   && ! (SUBREG_P (SET_DEST (XVECEXP (newpat, 0, 1)))
 		 && (temp_expr = SUBREG_REG (SET_DEST (XVECEXP (newpat, 0, 1))),
 		     (REG_P (temp_expr)
 		      && reg_stat[REGNO (temp_expr)].nonzero_bits != 0
@@ -4112,7 +4112,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
       if (!modified_between_p (SET_SRC (set1), i2, i3)
 	  && !(REG_P (SET_DEST (set1))
 	       && find_reg_note (i2, REG_DEAD, SET_DEST (set1)))
-	  && !(GET_CODE (SET_DEST (set1)) == SUBREG
+	  && !(SUBREG_P (SET_DEST (set1))
 	       && find_reg_note (i2, REG_DEAD,
 				 SUBREG_REG (SET_DEST (set1))))
 	  && !modified_between_p (SET_DEST (set1), i2, i3)
@@ -4128,7 +4128,7 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn *i1, rtx_insn *i0,
       else if (!modified_between_p (SET_SRC (set0), i2, i3)
 	       && !(REG_P (SET_DEST (set0))
 		    && find_reg_note (i2, REG_DEAD, SET_DEST (set0)))
-	       && !(GET_CODE (SET_DEST (set0)) == SUBREG
+	       && !(SUBREG_P (SET_DEST (set0))
 		    && find_reg_note (i2, REG_DEAD,
 				      SUBREG_REG (SET_DEST (set0))))
 	       && !modified_between_p (SET_DEST (set0), i2, i3)
@@ -4971,7 +4971,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
       /* If we have (mem (const ..)) or (mem (symbol_ref ...)), split it
 	 using LO_SUM and HIGH.  */
       if (HAVE_lo_sum && (GET_CODE (XEXP (x, 0)) == CONST
-			  || GET_CODE (XEXP (x, 0)) == SYMBOL_REF))
+			  || SYMBOL_REF_P (XEXP (x, 0))))
 	{
 	  machine_mode address_mode = get_address_mode (x);
 
@@ -5045,7 +5045,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
 	  if (GET_CODE (XEXP (XEXP (x, 0), 0)) == PLUS
 	      && !OBJECT_P (XEXP (XEXP (XEXP (x, 0), 0), 0))
 	      && OBJECT_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
-	      && ! (GET_CODE (XEXP (XEXP (XEXP (x, 0), 0), 0)) == SUBREG
+	      && ! (SUBREG_P (XEXP (XEXP (XEXP (x, 0), 0), 0))
 		    && OBJECT_P (SUBREG_REG (XEXP (XEXP (XEXP (x, 0),
 							 0), 0)))))
 	    {
@@ -5062,7 +5062,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
 	  else if (GET_CODE (XEXP (XEXP (x, 0), 0)) == PLUS
 		   && OBJECT_P (XEXP (XEXP (XEXP (x, 0), 0), 0))
 		   && !OBJECT_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
-		   && ! (GET_CODE (XEXP (XEXP (XEXP (x, 0), 0), 1)) == SUBREG
+		   && ! (SUBREG_P (XEXP (XEXP (XEXP (x, 0), 0), 1))
 			 && OBJECT_P (SUBREG_REG (XEXP (XEXP (XEXP (x, 0),
 							      0), 1)))))
 	    {
@@ -5082,7 +5082,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
 	     This will occur on machines that just support REG + CONST
 	     and have a constant moved through some previous computation.  */
 	  if (!OBJECT_P (XEXP (XEXP (x, 0), 0))
-	      && ! (GET_CODE (XEXP (XEXP (x, 0), 0)) == SUBREG
+	      && ! (SUBREG_P (XEXP (XEXP (x, 0), 0))
 		    && OBJECT_P (SUBREG_REG (XEXP (XEXP (x, 0), 0)))))
 	    return &XEXP (XEXP (x, 0), 0);
 	}
@@ -5093,7 +5093,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
           && ! memory_address_addr_space_p (GET_MODE (x), XEXP (x, 0),
 					    MEM_ADDR_SPACE (x))
           && ! OBJECT_P (XEXP (XEXP (x, 0), 0))
-          && ! (GET_CODE (XEXP (XEXP (x, 0), 0)) == SUBREG
+          && ! (SUBREG_P (XEXP (XEXP (x, 0), 0))
                 && OBJECT_P (SUBREG_REG (XEXP (XEXP (x, 0), 0)))))
         return &XEXP (XEXP (x, 0), 0);
       break;
@@ -5108,7 +5108,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
 	  && GET_CODE (SET_SRC (x)) != COMPARE
 	  && GET_CODE (SET_SRC (x)) != ZERO_EXTRACT
 	  && !OBJECT_P (SET_SRC (x))
-	  && ! (GET_CODE (SET_SRC (x)) == SUBREG
+	  && ! (SUBREG_P (SET_SRC (x))
 		&& OBJECT_P (SUBREG_REG (SET_SRC (x)))))
 	return &SET_SRC (x);
 
@@ -5313,7 +5313,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
       if (BINARY_P (SET_SRC (x))
 	  && CONSTANT_P (XEXP (SET_SRC (x), 1))
 	  && (OBJECT_P (XEXP (SET_SRC (x), 0))
-	      || (GET_CODE (XEXP (SET_SRC (x), 0)) == SUBREG
+	      || (SUBREG_P (XEXP (SET_SRC (x), 0))
 		  && OBJECT_P (SUBREG_REG (XEXP (SET_SRC (x), 0))))))
 	return &XEXP (SET_SRC (x), 1);
 
@@ -5361,7 +5361,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
 	 constant.  It may be better to try splitting (plus (mult B -C) A)
 	 instead if this isn't a multiply by a power of two.  */
       if (set_src && code == MINUS && GET_CODE (XEXP (x, 1)) == MULT
-	  && GET_CODE (XEXP (XEXP (x, 1), 1)) == CONST_INT
+	  && CONST_INT_P (XEXP (XEXP (x, 1), 1))
 	  && !pow2p_hwi (INTVAL (XEXP (XEXP (x, 1), 1))))
 	{
 	  machine_mode mode = GET_MODE (x);
@@ -5382,7 +5382,7 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
       if (!set_src
 	  && (GET_CODE (XEXP (x, 0)) == MULT
 	      || (GET_CODE (XEXP (x, 0)) == ASHIFT
-		  && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT)))
+		  && CONST_INT_P (XEXP (XEXP (x, 0), 1)))))
         return loc;
 
     default:
@@ -5626,7 +5626,7 @@ subst (rtx x, rtx from, rtx to, int in_dest, int in_cond, int unique_copy)
 		     tieable and it is valid if X is a SET that copies
 		     FROM to CC0.  */
 
-		  if (GET_CODE (to) == SUBREG
+		  if (SUBREG_P (to)
 		      && !targetm.modes_tieable_p (GET_MODE (to),
 						   GET_MODE (SUBREG_REG (to)))
 		      && ! (code == SUBREG
@@ -5676,7 +5676,7 @@ subst (rtx x, rtx from, rtx to, int in_dest, int in_cond, int unique_copy)
 	      if (GET_CODE (new_rtx) == CLOBBER && XEXP (new_rtx, 0) == const0_rtx)
 		return new_rtx;
 
-	      if (GET_CODE (x) == SUBREG && CONST_SCALAR_INT_P (new_rtx))
+	      if (SUBREG_P (x) && CONST_SCALAR_INT_P (new_rtx))
 		{
 		  machine_mode mode = GET_MODE (x);
 
@@ -5830,14 +5830,14 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 
   if ((BINARY_P (x)
        && ((!OBJECT_P (XEXP (x, 0))
-	    && ! (GET_CODE (XEXP (x, 0)) == SUBREG
+	    && ! (SUBREG_P (XEXP (x, 0))
 		  && OBJECT_P (SUBREG_REG (XEXP (x, 0)))))
 	   || (!OBJECT_P (XEXP (x, 1))
-	       && ! (GET_CODE (XEXP (x, 1)) == SUBREG
+	       && ! (SUBREG_P (XEXP (x, 1))
 		     && OBJECT_P (SUBREG_REG (XEXP (x, 1)))))))
       || (UNARY_P (x)
 	  && (!OBJECT_P (XEXP (x, 0))
-	       && ! (GET_CODE (XEXP (x, 0)) == SUBREG
+	       && ! (SUBREG_P (XEXP (x, 0))
 		     && OBJECT_P (SUBREG_REG (XEXP (x, 0)))))))
     {
       rtx cond, true_rtx, false_rtx;
@@ -6049,7 +6049,7 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 	 complex if it was just a register.  */
 
       if (!REG_P (temp)
-	  && ! (GET_CODE (temp) == SUBREG
+	  && ! (SUBREG_P (temp)
 		&& REG_P (SUBREG_REG (temp)))
 	  && is_a <scalar_int_mode> (mode, &int_mode)
 	  && (i = exact_log2 (nonzero_bits (temp, int_mode))) >= 0)
@@ -6151,7 +6151,7 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode, int in_dest,
 	 "a = (b & 8) == 0;"  */
       if (XEXP (x, 1) == constm1_rtx
 	  && !REG_P (XEXP (x, 0))
-	  && ! (GET_CODE (XEXP (x, 0)) == SUBREG
+	  && ! (SUBREG_P (XEXP (x, 0))
 		&& REG_P (SUBREG_REG (XEXP (x, 0))))
 	  && is_a <scalar_int_mode> (mode, &int_mode)
 	  && nonzero_bits (XEXP (x, 0), int_mode) == 1)
@@ -6587,7 +6587,7 @@ simplify_if_then_else (rtx x)
 	      && !CONST_INT_P (false_rtx) && false_rtx != pc_rtx)
 	  || true_rtx == const0_rtx
 	  || (OBJECT_P (true_rtx) && !OBJECT_P (false_rtx))
-	  || (GET_CODE (true_rtx) == SUBREG && OBJECT_P (SUBREG_REG (true_rtx))
+	  || (SUBREG_P (true_rtx) && OBJECT_P (SUBREG_REG (true_rtx))
 	      && !OBJECT_P (false_rtx))
 	  || reg_mentioned_p (true_rtx, false_rtx)
 	  || rtx_equal_p (false_rtx, XEXP (cond, 0))))
@@ -6712,7 +6712,7 @@ simplify_if_then_else (rtx x)
 		   || GET_CODE (XEXP (t, 0)) == ASHIFT
 		   || GET_CODE (XEXP (t, 0)) == LSHIFTRT
 		   || GET_CODE (XEXP (t, 0)) == ASHIFTRT)
-	       && GET_CODE (XEXP (XEXP (t, 0), 0)) == SUBREG
+	       && SUBREG_P (XEXP (XEXP (t, 0), 0))
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 0))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
 	       && (num_sign_bit_copies (f, GET_MODE (f))
@@ -6729,7 +6729,7 @@ simplify_if_then_else (rtx x)
 	       && (GET_CODE (XEXP (t, 0)) == PLUS
 		   || GET_CODE (XEXP (t, 0)) == IOR
 		   || GET_CODE (XEXP (t, 0)) == XOR)
-	       && GET_CODE (XEXP (XEXP (t, 0), 1)) == SUBREG
+	       && SUBREG_P (XEXP (XEXP (t, 0), 1))
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 1))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
 	       && (num_sign_bit_copies (f, GET_MODE (f))
@@ -6750,7 +6750,7 @@ simplify_if_then_else (rtx x)
 		   || GET_CODE (XEXP (t, 0)) == ASHIFT
 		   || GET_CODE (XEXP (t, 0)) == LSHIFTRT
 		   || GET_CODE (XEXP (t, 0)) == ASHIFTRT)
-	       && GET_CODE (XEXP (XEXP (t, 0), 0)) == SUBREG
+	       && SUBREG_P (XEXP (XEXP (t, 0), 0))
 	       && HWI_COMPUTABLE_MODE_P (int_mode)
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 0))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 0)), f)
@@ -6767,7 +6767,7 @@ simplify_if_then_else (rtx x)
 	       && (GET_CODE (XEXP (t, 0)) == PLUS
 		   || GET_CODE (XEXP (t, 0)) == IOR
 		   || GET_CODE (XEXP (t, 0)) == XOR)
-	       && GET_CODE (XEXP (XEXP (t, 0), 1)) == SUBREG
+	       && SUBREG_P (XEXP (XEXP (t, 0), 1))
 	       && HWI_COMPUTABLE_MODE_P (int_mode)
 	       && subreg_lowpart_p (XEXP (XEXP (t, 0), 1))
 	       && rtx_equal_p (SUBREG_REG (XEXP (XEXP (t, 0), 1)), f)
@@ -7051,7 +7051,7 @@ simplify_set (rtx x)
      be undefined.  On machine where it is defined, this transformation is safe
      as long as M1 and M2 have the same number of words.  */
 
-  if (GET_CODE (src) == SUBREG && subreg_lowpart_p (src)
+  if (SUBREG_P (src) && subreg_lowpart_p (src)
       && !OBJECT_P (SUBREG_REG (src))
       && (known_equal_after_align_up
 	  (GET_MODE_SIZE (GET_MODE (src)),
@@ -7063,7 +7063,7 @@ simplify_set (rtx x)
 				       GET_MODE (SUBREG_REG (src)),
 				       GET_MODE (src)))
       && (REG_P (dest)
-	  || (GET_CODE (dest) == SUBREG
+	  || (SUBREG_P (dest)
 	      && REG_P (SUBREG_REG (dest)))))
     {
       SUBST (SET_DEST (x),
@@ -7387,7 +7387,7 @@ expand_compound_operation (rtx x)
 	return XEXP (XEXP (x, 0), 0);
 
       /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)).  */
-      if (GET_CODE (XEXP (x, 0)) == SUBREG
+      if (SUBREG_P (XEXP (x, 0))
 	  && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode
 	  && subreg_lowpart_p (XEXP (x, 0))
 	  && HWI_COMPUTABLE_MODE_P (mode)
@@ -7407,7 +7407,7 @@ expand_compound_operation (rtx x)
 	return XEXP (XEXP (x, 0), 0);
 
       /* Likewise for (zero_extend:DI (subreg:SI foo:DI 0)).  */
-      if (GET_CODE (XEXP (x, 0)) == SUBREG
+      if (SUBREG_P (XEXP (x, 0))
 	  && GET_MODE (SUBREG_REG (XEXP (x, 0))) == mode
 	  && subreg_lowpart_p (XEXP (x, 0))
 	  && COMPARISON_P (SUBREG_REG (XEXP (x, 0)))
@@ -7482,7 +7482,7 @@ expand_field_assignment (const_rtx x)
   while (1)
     {
       if (GET_CODE (SET_DEST (x)) == STRICT_LOW_PART
-	  && GET_CODE (XEXP (SET_DEST (x), 0)) == SUBREG)
+	  && SUBREG_P (XEXP (SET_DEST (x), 0)))
 	{
 	  rtx x0 = XEXP (SET_DEST (x), 0);
 	  if (!GET_MODE_PRECISION (GET_MODE (x0)).is_constant (&len))
@@ -7524,7 +7524,7 @@ expand_field_assignment (const_rtx x)
 
       /* If the destination is a subreg that overwrites the whole of the inner
 	 register, we can move the subreg to the source.  */
-      else if (GET_CODE (SET_DEST (x)) == SUBREG
+      else if (SUBREG_P (SET_DEST (x))
 	       /* We need SUBREGs to compute nonzero_bits properly.  */
 	       && nonzero_sign_valid
 	       && !read_modify_subreg_p (SET_DEST (x)))
@@ -7538,7 +7538,7 @@ expand_field_assignment (const_rtx x)
       else
 	break;
 
-      while (GET_CODE (inner) == SUBREG && subreg_lowpart_p (inner))
+      while (SUBREG_P (inner) && subreg_lowpart_p (inner))
 	inner = SUBREG_REG (inner);
 
       /* Don't attempt bitwise arithmetic on non scalar integer modes.  */
@@ -7631,7 +7631,7 @@ make_extraction (machine_mode mode, rtx inner, HOST_WIDE_INT pos,
   if (pos_rtx && CONST_INT_P (pos_rtx))
     pos = INTVAL (pos_rtx), pos_rtx = 0;
 
-  if (GET_CODE (inner) == SUBREG
+  if (SUBREG_P (inner)
       && subreg_lowpart_p (inner)
       && (paradoxical_subreg_p (inner)
 	  /* If trying or potentionally trying to extract
@@ -7764,7 +7764,7 @@ make_extraction (machine_mode mode, rtx inner, HOST_WIDE_INT pos,
 
       if (in_dest)
 	return (MEM_P (new_rtx) ? new_rtx
-		: (GET_CODE (new_rtx) != SUBREG
+		: (!SUBREG_P (new_rtx)
 		   ? gen_rtx_CLOBBER (tmode, const0_rtx)
 		   : gen_rtx_STRICT_LOW_PART (VOIDmode, new_rtx)));
 
@@ -8189,7 +8189,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 	}
 
       /* Same as previous, but for (subreg (lshiftrt ...)) in first op.  */
-      else if (GET_CODE (XEXP (x, 0)) == SUBREG
+      else if (SUBREG_P (XEXP (x, 0))
 	       && subreg_lowpart_p (XEXP (x, 0))
 	       && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (XEXP (x, 0))),
 					  &inner_mode)
@@ -8294,7 +8294,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 	 the constant (limited to the smaller mode) has only zero bits where
 	 the sub expression has known zero bits, this can be expressed as
 	 a zero_extend.  */
-      else if (GET_CODE (XEXP (x, 0)) == SUBREG)
+      else if (SUBREG_P (XEXP (x, 0)))
 	{
 	  rtx sub;
 
@@ -8365,7 +8365,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 	 seem worth the effort; the case checked for occurs on Alpha.  */
 
       if (!OBJECT_P (lhs)
-	  && ! (GET_CODE (lhs) == SUBREG
+	  && ! (SUBREG_P (lhs)
 		&& (OBJECT_P (SUBREG_REG (lhs))))
 	  && CONST_INT_P (rhs)
 	  && INTVAL (rhs) >= 0
@@ -8410,7 +8410,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 	   to the recursive make_compound_operation call.  */
 	if (subreg_code == COMPARE
 	    && (!subreg_lowpart_p (x)
-		|| GET_CODE (inner) == SUBREG
+		|| SUBREG_P (inner)
 		/* (subreg:SI (and:DI (reg:DI) (const_int 0x800000000)) 0)
 		   is (const_int 0), rather than
 		   (subreg:SI (lshiftrt:DI (reg:DI) (const_int 35)) 0).
@@ -8440,7 +8440,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 
 	    /* If we have something other than a SUBREG, we might have
 	       done an expansion, so rerun ourselves.  */
-	    if (GET_CODE (newer) != SUBREG)
+	    if (!SUBREG_P (newer))
 	      newer = make_compound_operation (newer, in_code);
 
 	    /* force_to_mode can expand compounds.  If it just re-expanded
@@ -8450,7 +8450,7 @@ make_compound_operation_int (scalar_int_mode mode, rtx *x_ptr,
 		/* Likewise if it re-expanded the compound only partially.
 		   This happens for SUBREG of ZERO_EXTRACT if they extract
 		   the same number of bits.  */
-		|| (GET_CODE (newer) == SUBREG
+		|| (SUBREG_P (newer)
 		    && (GET_CODE (SUBREG_REG (newer)) == LSHIFTRT
 			|| GET_CODE (SUBREG_REG (newer)) == ASHIFTRT)
 		    && GET_CODE (inner) == AND
@@ -8777,7 +8777,7 @@ force_to_mode (rtx x, machine_mode mode, unsigned HOST_WIDE_INT mask,
 
   /* We can ignore the effect of a SUBREG if it narrows the mode or
      if the constant masks to zero all the bits the mode doesn't have.  */
-  if (GET_CODE (x) == SUBREG
+  if (SUBREG_P (x)
       && subreg_lowpart_p (x)
       && (partial_subreg_p (x)
 	  || (mask
@@ -9711,13 +9711,13 @@ rtx_equal_for_field_assignment_p (rtx x, rtx y, bool widen_x)
   /* Check for a paradoxical SUBREG of a MEM compared with the MEM.
      Note that all SUBREGs of MEM are paradoxical; otherwise they
      would have been rewritten.  */
-  if (MEM_P (x) && GET_CODE (y) == SUBREG
+  if (MEM_P (x) && SUBREG_P (y)
       && MEM_P (SUBREG_REG (y))
       && rtx_equal_p (SUBREG_REG (y),
 		      gen_lowpart (GET_MODE (SUBREG_REG (y)), x)))
     return 1;
 
-  if (MEM_P (y) && GET_CODE (x) == SUBREG
+  if (MEM_P (y) && SUBREG_P (x)
       && MEM_P (SUBREG_REG (x))
       && rtx_equal_p (SUBREG_REG (x),
 		      gen_lowpart (GET_MODE (SUBREG_REG (x)), y)))
@@ -9770,7 +9770,7 @@ make_field_assignment (rtx x)
       return x;
     }
 
-  if (GET_CODE (src) == AND && GET_CODE (XEXP (src, 0)) == SUBREG
+  if (GET_CODE (src) == AND && SUBREG_P (XEXP (src, 0))
       && subreg_lowpart_p (XEXP (src, 0))
       && partial_subreg_p (XEXP (src, 0))
       && GET_CODE (SUBREG_REG (XEXP (src, 0))) == ROTATE
@@ -9842,7 +9842,7 @@ make_field_assignment (rtx x)
      narrowing SUBREG, which we can just strip for the purposes of
      identifying the constant-field assignment.  */
   scalar_int_mode src_mode = mode;
-  if (GET_CODE (src) == SUBREG
+  if (SUBREG_P (src)
       && subreg_lowpart_p (src)
       && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (src)), &src_mode))
     src = SUBREG_REG (src);
@@ -11622,7 +11622,7 @@ change_zero_ext (rtx pat)
 	    }
 	}
       else if (GET_CODE (x) == ZERO_EXTEND
-	       && GET_CODE (XEXP (x, 0)) == SUBREG
+	       && SUBREG_P (XEXP (x, 0))
 	       && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (XEXP (x, 0))))
 	       && !paradoxical_subreg_p (XEXP (x, 0))
 	       && subreg_lowpart_p (XEXP (x, 0)))
@@ -11781,7 +11781,7 @@ gen_lowpart_for_combine (machine_mode omode, rtx x)
   /* X might be a paradoxical (subreg (mem)).  In that case, gen_lowpart
      won't know what to do.  So we will strip off the SUBREG here and
      process normally.  */
-  if (GET_CODE (x) == SUBREG && MEM_P (SUBREG_REG (x)))
+  if (SUBREG_P (x) && MEM_P (SUBREG_REG (x)))
     {
       x = SUBREG_REG (x);
 
@@ -12076,8 +12076,8 @@ simplify_comparison (enum rtx_code code, rtx *pop0, rtx *pop1)
 	  && GET_CODE (op0) == ASHIFTRT && GET_CODE (op1) == ASHIFTRT
 	  && GET_CODE (XEXP (op0, 0)) == ASHIFT
 	  && GET_CODE (XEXP (op1, 0)) == ASHIFT
-	  && GET_CODE (XEXP (XEXP (op0, 0), 0)) == SUBREG
-	  && GET_CODE (XEXP (XEXP (op1, 0), 0)) == SUBREG
+	  && SUBREG_P (XEXP (XEXP (op0, 0), 0))
+	  && SUBREG_P (XEXP (XEXP (op1, 0), 0))
 	  && is_a <scalar_int_mode> (GET_MODE (op0), &mode)
 	  && (is_a <scalar_int_mode>
 	      (GET_MODE (SUBREG_REG (XEXP (XEXP (op0, 0), 0))), &inner_mode))
@@ -12151,7 +12151,7 @@ simplify_comparison (enum rtx_code code, rtx *pop0, rtx *pop1)
 	  int changed = 0;
 
 	  if (paradoxical_subreg_p (inner_op0)
-	      && GET_CODE (inner_op1) == SUBREG
+	      && SUBREG_P (inner_op1)
 	      && HWI_COMPUTABLE_MODE_P (GET_MODE (SUBREG_REG (inner_op0)))
 	      && (GET_MODE (SUBREG_REG (inner_op0))
 		  == GET_MODE (SUBREG_REG (inner_op1)))
@@ -12729,7 +12729,7 @@ simplify_comparison (enum rtx_code code, rtx *pop0, rtx *pop1)
 	     fits in both M1 and M2 and the SUBREG is either paradoxical
 	     or represents the low part, permute the SUBREG and the AND
 	     and try again.  */
-	  if (GET_CODE (XEXP (op0, 0)) == SUBREG
+	  if (SUBREG_P (XEXP (op0, 0))
 	      && CONST_INT_P (XEXP (op0, 1)))
 	    {
 	      unsigned HOST_WIDE_INT c1 = INTVAL (XEXP (op0, 1));
@@ -13008,7 +13008,7 @@ simplify_comparison (enum rtx_code code, rtx *pop0, rtx *pop1)
   op0 = make_compound_operation (op0, op0_mco_code);
   op1 = make_compound_operation (op1, SET);
 
-  if (GET_CODE (op0) == SUBREG && subreg_lowpart_p (op0)
+  if (SUBREG_P (op0) && subreg_lowpart_p (op0)
       && is_int_mode (GET_MODE (op0), &mode)
       && is_int_mode (GET_MODE (SUBREG_REG (op0)), &inner_mode)
       && (code == NE || code == EQ))
@@ -13376,7 +13376,7 @@ record_dead_and_set_regs_1 (rtx dest, const_rtx setter, void *data)
 {
   rtx_insn *record_dead_insn = (rtx_insn *) data;
 
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dest = SUBREG_REG (dest);
 
   if (!record_dead_insn)
@@ -13397,7 +13397,7 @@ record_dead_and_set_regs_1 (rtx dest, const_rtx setter, void *data)
       if (GET_CODE (setter) == SET && dest == SET_DEST (setter))
 	record_value_for_reg (dest, record_dead_insn, SET_SRC (setter));
       else if (GET_CODE (setter) == SET
-	       && GET_CODE (SET_DEST (setter)) == SUBREG
+	       && SUBREG_P (SET_DEST (setter))
 	       && SUBREG_REG (SET_DEST (setter)) == dest
 	       && known_le (GET_MODE_PRECISION (GET_MODE (dest)),
 			    BITS_PER_WORD)
@@ -13577,7 +13577,7 @@ record_truncated_value (rtx x)
   machine_mode truncated_mode;
   reg_stat_type *rsp;
 
-  if (GET_CODE (x) == SUBREG && REG_P (SUBREG_REG (x)))
+  if (SUBREG_P (x) && REG_P (SUBREG_REG (x)))
     {
       machine_mode original_mode = GET_MODE (SUBREG_REG (x));
       truncated_mode = GET_MODE (x);
@@ -13629,7 +13629,7 @@ record_truncated_values (rtx *loc, void *data ATTRIBUTE_UNUSED)
 static void
 check_promoted_subreg (rtx_insn *insn, rtx x)
 {
-  if (GET_CODE (x) == SUBREG
+  if (SUBREG_P (x)
       && SUBREG_PROMOTED_VAR_P (x)
       && REG_P (SUBREG_REG (x)))
     record_promoted_value (insn, x);
@@ -13775,7 +13775,7 @@ get_last_value (const_rtx x)
   /* If this is a non-paradoxical SUBREG, get the value of its operand and
      then convert it to the desired mode.  If this is a paradoxical SUBREG,
      we cannot predict what values the "extra" bits might have.  */
-  if (GET_CODE (x) == SUBREG
+  if (SUBREG_P (x)
       && subreg_lowpart_p (x)
       && !paradoxical_subreg_p (x)
       && (value = get_last_value (SUBREG_REG (x))) != 0)
@@ -13982,7 +13982,7 @@ mark_used_regs_combine (rtx x)
 	   the address.  */
 	rtx testreg = SET_DEST (x);
 
-	while (GET_CODE (testreg) == SUBREG
+	while (SUBREG_P (testreg)
 	       || GET_CODE (testreg) == ZERO_EXTRACT
 	       || GET_CODE (testreg) == STRICT_LOW_PART)
 	  testreg = XEXP (testreg, 0);
@@ -14162,7 +14162,7 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_luid, rtx_insn *to_insn,
 
       if (GET_CODE (dest) == ZERO_EXTRACT
 	  || GET_CODE (dest) == STRICT_LOW_PART
-	  || (GET_CODE (dest) == SUBREG
+	  || (SUBREG_P (dest)
 	      && !read_modify_subreg_p (dest)))
 	{
 	  move_deaths (dest, maybe_kill_insn, from_luid, to_insn, pnotes);
@@ -14171,7 +14171,7 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_luid, rtx_insn *to_insn,
 
       /* If this is some other SUBREG, we know it replaces the entire
 	 value, so use that as the destination.  */
-      if (GET_CODE (dest) == SUBREG)
+      if (SUBREG_P (dest))
 	dest = SUBREG_REG (dest);
 
       /* If this is a MEM, adjust deaths of anything used in the address.
@@ -14225,7 +14225,7 @@ reg_bitfield_target_p (rtx x, rtx body)
       else
 	return 0;
 
-      if (GET_CODE (target) == SUBREG)
+      if (SUBREG_P (target))
 	target = SUBREG_REG (target);
 
       if (!REG_P (target))
@@ -14483,14 +14483,14 @@ distribute_notes (rtx notes, rtx_insn *from_insn, rtx_insn *i3, rtx_insn *i2,
 	  /* ??? Ignore the without-reg_equal-note problem for now.  */
 	  if (reg_mentioned_p (XEXP (note, 0), PATTERN (i3))
 	      || ((tem_note = find_reg_note (i3, REG_EQUAL, NULL_RTX))
-		  && GET_CODE (XEXP (tem_note, 0)) == LABEL_REF
+		  && LABEL_REF_P (XEXP (tem_note, 0))
 		  && label_ref_label (XEXP (tem_note, 0)) == XEXP (note, 0)))
 	    place = i3;
 
 	  if (i2
 	      && (reg_mentioned_p (XEXP (note, 0), PATTERN (i2))
 		  || ((tem_note = find_reg_note (i2, REG_EQUAL, NULL_RTX))
-		      && GET_CODE (XEXP (tem_note, 0)) == LABEL_REF
+		      && LABEL_REF_P (XEXP (tem_note, 0))
 		      && label_ref_label (XEXP (tem_note, 0)) == XEXP (note, 0))))
 	    {
 	      if (place)
@@ -14624,7 +14624,7 @@ distribute_notes (rtx notes, rtx_insn *from_insn, rtx_insn *i3, rtx_insn *i2,
 		      if (set != 0)
 			for (inner_dest = SET_DEST (set);
 			     (GET_CODE (inner_dest) == STRICT_LOW_PART
-			      || GET_CODE (inner_dest) == SUBREG
+			      || SUBREG_P (inner_dest)
 			      || GET_CODE (inner_dest) == ZERO_EXTRACT);
 			     inner_dest = XEXP (inner_dest, 0))
 			  ;
@@ -14892,7 +14892,7 @@ distribute_links (struct insn_link *links)
 	      reg = SET_DEST (set);
 	      while (GET_CODE (reg) == ZERO_EXTRACT
 		     || GET_CODE (reg) == STRICT_LOW_PART
-		     || GET_CODE (reg) == SUBREG)
+		     || SUBREG_P (reg))
 		reg = XEXP (reg, 0);
 
 	      if (!REG_P (reg))
@@ -14911,7 +14911,7 @@ distribute_links (struct insn_link *links)
 
       while (GET_CODE (reg) == ZERO_EXTRACT
 	     || GET_CODE (reg) == STRICT_LOW_PART
-	     || GET_CODE (reg) == SUBREG)
+	     || SUBREG_P (reg))
 	reg = XEXP (reg, 0);
 
       if (reg == pc_rtx)
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 65c0130cc07..ecb383487bc 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -1621,7 +1621,7 @@ bypass_block (basic_block bb, rtx_insn *setcc, rtx_insn *jump)
 	      edest = FALLTHRU_EDGE (bb);
 	      dest = edest->insns.r ? NULL : edest->dest;
 	    }
-	  else if (GET_CODE (new_rtx) == LABEL_REF)
+	  else if (LABEL_REF_P (new_rtx))
 	    {
 	      dest = BLOCK_FOR_INSN (XEXP (new_rtx, 0));
 	      /* Don't bypass edges containing instructions.  */
diff --git a/gcc/cse.c b/gcc/cse.c
index 35840a6d5ca..5662dc0eeae 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -722,7 +722,7 @@ static int
 notreg_cost (rtx x, machine_mode mode, enum rtx_code outer, int opno)
 {
   scalar_int_mode int_mode, inner_mode;
-  return ((GET_CODE (x) == SUBREG
+  return ((SUBREG_P (x)
 	   && REG_P (SUBREG_REG (x))
 	   && is_int_mode (mode, &int_mode)
 	   && is_int_mode (GET_MODE (SUBREG_REG (x)), &inner_mode)
@@ -1171,7 +1171,7 @@ insert_regs (rtx x, struct table_elt *classp, int modified)
      not be accessible because its hash code will have changed.  So assign
      a quantity number now.  */
 
-  else if (GET_CODE (x) == SUBREG && REG_P (SUBREG_REG (x))
+  else if (SUBREG_P (x) && REG_P (SUBREG_REG (x))
 	   && ! REGNO_QTY_VALID_P (REGNO (SUBREG_REG (x))))
     {
       insert_regs (SUBREG_REG (x), NULL, 0);
@@ -1289,7 +1289,7 @@ find_reg_offset_for_const (struct table_elt *anchor_elt, HOST_WIDE_INT offs,
       if (REG_P (elt->exp)
 	  || (GET_CODE (elt->exp) == PLUS
 	      && REG_P (XEXP (elt->exp, 0))
-	      && GET_CODE (XEXP (elt->exp, 1)) == CONST_INT))
+	      && CONST_INT_P (XEXP (elt->exp, 1))))
 	{
 	  rtx x;
 
@@ -1828,7 +1828,7 @@ check_dependence (const_rtx x, rtx exp, machine_mode mode, rtx addr)
 static void
 invalidate_reg (rtx x, bool clobber_high)
 {
-  gcc_assert (GET_CODE (x) == REG);
+  gcc_assert (REG_P (x));
 
   /* If X is a register, dependencies on its contents are recorded
      through the qty number mechanism.  Just change the qty number of
@@ -1984,7 +1984,7 @@ static void
 invalidate_dest (rtx dest)
 {
   if (REG_P (dest)
-      || GET_CODE (dest) == SUBREG
+      || SUBREG_P (dest)
       || MEM_P (dest))
     invalidate (dest, VOIDmode);
   else if (GET_CODE (dest) == STRICT_LOW_PART
@@ -2028,7 +2028,7 @@ remove_invalid_subreg_refs (unsigned int regno, poly_uint64 offset,
 	next = p->next_same_hash;
 
 	if (!REG_P (exp)
-	    && (GET_CODE (exp) != SUBREG
+	    && (!SUBREG_P (exp)
 		|| !REG_P (SUBREG_REG (exp))
 		|| REGNO (SUBREG_REG (exp)) != regno
 		|| ranges_maybe_overlap_p (SUBREG_BYTE (exp),
@@ -2051,7 +2051,7 @@ rehash_using_reg (rtx x)
   struct table_elt *p, *next;
   unsigned hash;
 
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     x = SUBREG_REG (x);
 
   /* If X is not a register or if the register is known not to be in any
@@ -3537,13 +3537,13 @@ fold_rtx (rtx x, rtx_insn *insn)
 	     with that LABEL_REF as its second operand.  If so, the result is
 	     the first operand of that MINUS.  This handles switches with an
 	     ADDR_DIFF_VEC table.  */
-	  if (const_arg1 && GET_CODE (const_arg1) == LABEL_REF)
+	  if (const_arg1 && LABEL_REF_P (const_arg1))
 	    {
 	      rtx y
 		= GET_CODE (folded_arg0) == MINUS ? folded_arg0
 		: lookup_as_function (folded_arg0, MINUS);
 
-	      if (y != 0 && GET_CODE (XEXP (y, 1)) == LABEL_REF
+	      if (y != 0 && LABEL_REF_P (XEXP (y, 1))
 		  && label_ref_label (XEXP (y, 1)) == label_ref_label (const_arg1))
 		return XEXP (y, 0);
 
@@ -3551,19 +3551,19 @@ fold_rtx (rtx x, rtx_insn *insn)
 	      if ((y = (GET_CODE (folded_arg0) == CONST ? folded_arg0
 			: lookup_as_function (folded_arg0, CONST))) != 0
 		  && GET_CODE (XEXP (y, 0)) == MINUS
-		  && GET_CODE (XEXP (XEXP (y, 0), 1)) == LABEL_REF
+		  && LABEL_REF_P (XEXP (XEXP (y, 0), 1))
 		  && label_ref_label (XEXP (XEXP (y, 0), 1)) == label_ref_label (const_arg1))
 		return XEXP (XEXP (y, 0), 0);
 	    }
 
 	  /* Likewise if the operands are in the other order.  */
-	  if (const_arg0 && GET_CODE (const_arg0) == LABEL_REF)
+	  if (const_arg0 && LABEL_REF_P (const_arg0))
 	    {
 	      rtx y
 		= GET_CODE (folded_arg1) == MINUS ? folded_arg1
 		: lookup_as_function (folded_arg1, MINUS);
 
-	      if (y != 0 && GET_CODE (XEXP (y, 1)) == LABEL_REF
+	      if (y != 0 && LABEL_REF_P (XEXP (y, 1))
 		  && label_ref_label (XEXP (y, 1)) == label_ref_label (const_arg0))
 		return XEXP (y, 0);
 
@@ -3571,7 +3571,7 @@ fold_rtx (rtx x, rtx_insn *insn)
 	      if ((y = (GET_CODE (folded_arg1) == CONST ? folded_arg1
 			: lookup_as_function (folded_arg1, CONST))) != 0
 		  && GET_CODE (XEXP (y, 0)) == MINUS
-		  && GET_CODE (XEXP (XEXP (y, 0), 1)) == LABEL_REF
+		  && LABEL_REF_P (XEXP (XEXP (y, 0), 1))
 		  && label_ref_label (XEXP (XEXP (y, 0), 1)) == label_ref_label (const_arg0))
 		return XEXP (XEXP (y, 0), 0);
 	    }
@@ -3811,7 +3811,7 @@ equiv_constant (rtx x)
   if (x == 0 || CONSTANT_P (x))
     return x;
 
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       machine_mode mode = GET_MODE (x);
       machine_mode imode = GET_MODE (SUBREG_REG (x));
@@ -4310,7 +4310,7 @@ find_sets_in_insn (rtx_insn *insn, struct set **psets)
 	 and not be misled by unchanged instructions
 	 that were unconditional jumps to begin with.  */
       if (SET_DEST (x) == pc_rtx
-	  && GET_CODE (SET_SRC (x)) == LABEL_REF)
+	  && LABEL_REF_P (SET_SRC (x)))
 	;
       /* Don't count call-insns, (set (reg 0) (call ...)), as a set.
 	 The hard function value register is used only once, to copy to
@@ -4334,7 +4334,7 @@ find_sets_in_insn (rtx_insn *insn, struct set **psets)
 	      /* As above, we ignore unconditional jumps and call-insns and
 		 ignore the result of apply_change_group.  */
 	      if (SET_DEST (y) == pc_rtx
-		  && GET_CODE (SET_SRC (y)) == LABEL_REF)
+		  && LABEL_REF_P (SET_SRC (y)))
 		;
 	      else if (GET_CODE (SET_SRC (y)) == CALL)
 		;
@@ -4517,7 +4517,7 @@ canonicalize_insn (rtx_insn *insn, struct set **psets, int n_sets)
 			   canon_reg (XEXP (dest, 2), insn), 1);
 	}
 
-      while (GET_CODE (dest) == SUBREG
+      while (SUBREG_P (dest)
 	     || GET_CODE (dest) == ZERO_EXTRACT
 	     || GET_CODE (dest) == STRICT_LOW_PART)
 	dest = XEXP (dest, 0);
@@ -4842,8 +4842,8 @@ cse_insn (rtx_insn *insn)
 		 "constant" here so we will record it. This allows us
 		 to fold switch statements when an ADDR_DIFF_VEC is used.  */
 	      || (GET_CODE (src_folded) == MINUS
-		  && GET_CODE (XEXP (src_folded, 0)) == LABEL_REF
-		  && GET_CODE (XEXP (src_folded, 1)) == LABEL_REF)))
+		  && LABEL_REF_P (XEXP (src_folded, 0))
+		  && LABEL_REF_P (XEXP (src_folded, 1)))))
 	src_const = src_folded, src_const_elt = elt;
       else if (src_const == 0 && src_eqv_here && CONSTANT_P (src_eqv_here))
 	src_const = src_eqv_here, src_const_elt = src_eqv_elt;
@@ -5029,7 +5029,7 @@ cse_insn (rtx_insn *insn)
       if (targetm.const_anchor
 	  && !src_related
 	  && src_const
-	  && GET_CODE (src_const) == CONST_INT)
+	  && CONST_INT_P (src_const))
 	{
 	  src_related = try_const_anchors (src_const, mode);
 	  src_related_is_const_anchor = src_related != NULL_RTX;
@@ -5066,7 +5066,7 @@ cse_insn (rtx_insn *insn)
 	     looking for.  */
 	  if (paradoxical_subreg_p (p->exp)
 	      && ! (src != 0
-		    && GET_CODE (src) == SUBREG
+		    && SUBREG_P (src)
 		    && GET_MODE (src) == GET_MODE (p->exp)
 		    && partial_subreg_p (GET_MODE (SUBREG_REG (src)),
 					 GET_MODE (SUBREG_REG (p->exp)))))
@@ -5155,7 +5155,7 @@ cse_insn (rtx_insn *insn)
 
       /* If this was an indirect jump insn, a known label will really be
 	 cheaper even though it looks more expensive.  */
-      if (dest == pc_rtx && src_const && GET_CODE (src_const) == LABEL_REF)
+      if (dest == pc_rtx && src_const && LABEL_REF_P (src_const))
 	src_folded = src_const, src_folded_cost = src_folded_regcost = -1;
 
       /* Terminate loop when replacement made.  This must terminate since
@@ -5177,7 +5177,7 @@ cse_insn (rtx_insn *insn)
 	      /* It is okay, though, if the rtx we're trying to match
 		 will ignore any of the bits we can't predict.  */
 	      && ! (src != 0
-		    && GET_CODE (src) == SUBREG
+		    && SUBREG_P (src)
 		    && GET_MODE (src) == GET_MODE (elt->exp)
 		    && partial_subreg_p (GET_MODE (SUBREG_REG (src)),
 					 GET_MODE (SUBREG_REG (elt->exp)))))
@@ -5341,11 +5341,11 @@ cse_insn (rtx_insn *insn)
 	     barriers).  */
 	  if (n_sets == 1 && dest == pc_rtx
 	      && (trial == pc_rtx
-		  || (GET_CODE (trial) == LABEL_REF
+		  || (LABEL_REF_P (trial)
 		      && ! condjump_p (insn))))
 	    {
 	      /* Don't substitute non-local labels, this confuses CFG.  */
-	      if (GET_CODE (trial) == LABEL_REF
+	      if (LABEL_REF_P (trial)
 		  && LABEL_REF_NONLOCAL_P (trial))
 		continue;
 
@@ -5380,8 +5380,8 @@ cse_insn (rtx_insn *insn)
 		       /* Likewise on IA-64, except without the
 			  truncate.  */
 		       || (GET_CODE (XEXP (trial, 0)) == MINUS
-			   && GET_CODE (XEXP (XEXP (trial, 0), 0)) == LABEL_REF
-			   && GET_CODE (XEXP (XEXP (trial, 0), 1)) == LABEL_REF)))
+			   && LABEL_REF_P (XEXP (XEXP (trial, 0), 0))
+			   && LABEL_REF_P (XEXP (XEXP (trial, 0), 1)))))
 	    /* Do nothing for this case.  */
 	    ;
 
@@ -5501,12 +5501,12 @@ cse_insn (rtx_insn *insn)
 	  && REG_P (dest)
 	  && src_const
 	  && !REG_P (src_const)
-	  && !(GET_CODE (src_const) == SUBREG
+	  && !(SUBREG_P (src_const)
 	       && REG_P (SUBREG_REG (src_const)))
 	  && !(GET_CODE (src_const) == CONST
 	       && GET_CODE (XEXP (src_const, 0)) == MINUS
-	       && GET_CODE (XEXP (XEXP (src_const, 0), 0)) == LABEL_REF
-	       && GET_CODE (XEXP (XEXP (src_const, 0), 1)) == LABEL_REF)
+	       && LABEL_REF_P (XEXP (XEXP (src_const, 0), 0))
+	       && LABEL_REF_P (XEXP (XEXP (src_const, 0), 1)))
 	  && !rtx_equal_p (src, src_const))
 	{
 	  /* Make sure that the rtx is not shared.  */
@@ -5522,7 +5522,7 @@ cse_insn (rtx_insn *insn)
       do_not_record = 0;
 
       /* Look within any ZERO_EXTRACT to the MEM or REG within it.  */
-      while (GET_CODE (dest) == SUBREG
+      while (SUBREG_P (dest)
 	     || GET_CODE (dest) == ZERO_EXTRACT
 	     || GET_CODE (dest) == STRICT_LOW_PART)
 	dest = XEXP (dest, 0);
@@ -5599,7 +5599,7 @@ cse_insn (rtx_insn *insn)
 
       /* If this SET is now setting PC to a label, we know it used to
 	 be a conditional or computed branch.  */
-      else if (dest == pc_rtx && GET_CODE (src) == LABEL_REF
+      else if (dest == pc_rtx && LABEL_REF_P (src)
 	       && !LABEL_REF_NONLOCAL_P (src))
 	{
 	  /* We reemit the jump in as many cases as possible just in
@@ -5853,7 +5853,7 @@ cse_insn (rtx_insn *insn)
 	   previous quantity's chain.
 	   Needed for memory if this is a nonvarying address, unless
 	   we have just done an invalidate_memory that covers even those.  */
-	if (REG_P (dest) || GET_CODE (dest) == SUBREG)
+	if (REG_P (dest) || SUBREG_P (dest))
 	  invalidate (dest, VOIDmode);
 	else if (MEM_P (dest))
 	  invalidate (dest, VOIDmode);
@@ -5975,7 +5975,7 @@ cse_insn (rtx_insn *insn)
 	if (GET_CODE (dest) == STRICT_LOW_PART)
 	  dest = SUBREG_REG (XEXP (dest, 0));
 
-	if (REG_P (dest) || GET_CODE (dest) == SUBREG)
+	if (REG_P (dest) || SUBREG_P (dest))
 	  /* Registers must also be inserted into chains for quantities.  */
 	  if (insert_regs (dest, sets[i].src_elt, 1))
 	    {
@@ -5998,7 +5998,7 @@ cse_insn (rtx_insn *insn)
 	if (targetm.const_anchor
 	    && REG_P (dest)
 	    && SCALAR_INT_MODE_P (GET_MODE (dest))
-	    && GET_CODE (sets[i].src_elt->exp) == CONST_INT)
+	    && CONST_INT_P (sets[i].src_elt->exp))
 	  insert_const_anchors (dest, sets[i].src_elt->exp, GET_MODE (dest));
 
 	elt->in_memory = (MEM_P (sets[i].inner_dest)
@@ -6019,7 +6019,7 @@ cse_insn (rtx_insn *insn)
 	   Note the loop below will find SUBREG_REG (DEST) since we have
 	   already entered SRC and DEST of the SET in the table.  */
 
-	if (GET_CODE (dest) == SUBREG
+	if (SUBREG_P (dest)
 	    && (known_equal_after_align_down
 		(GET_MODE_SIZE (GET_MODE (SUBREG_REG (dest))) - 1,
 		 GET_MODE_SIZE (GET_MODE (dest)) - 1,
@@ -6147,7 +6147,7 @@ invalidate_from_clobbers (rtx_insn *insn)
       rtx ref = XEXP (x, 0);
       if (ref)
 	{
-	  if (REG_P (ref) || GET_CODE (ref) == SUBREG
+	  if (REG_P (ref) || SUBREG_P (ref)
 	      || MEM_P (ref))
 	    invalidate (ref, VOIDmode);
 	  else if (GET_CODE (ref) == STRICT_LOW_PART
@@ -6170,7 +6170,7 @@ invalidate_from_clobbers (rtx_insn *insn)
 	  if (GET_CODE (y) == CLOBBER)
 	    {
 	      rtx ref = XEXP (y, 0);
-	      if (REG_P (ref) || GET_CODE (ref) == SUBREG
+	      if (REG_P (ref) || SUBREG_P (ref)
 		  || MEM_P (ref))
 		invalidate (ref, VOIDmode);
 	      else if (GET_CODE (ref) == STRICT_LOW_PART
@@ -6231,7 +6231,7 @@ invalidate_from_sets_and_clobbers (rtx_insn *insn)
 	      rtx clobbered = XEXP (y, 0);
 
 	      if (REG_P (clobbered)
-		  || GET_CODE (clobbered) == SUBREG)
+		  || SUBREG_P (clobbered))
 		invalidate (clobbered, VOIDmode);
 	      else if (GET_CODE (clobbered) == STRICT_LOW_PART
 		       || GET_CODE (clobbered) == ZERO_EXTRACT)
@@ -6582,7 +6582,7 @@ check_for_label_ref (rtx_insn *insn)
   FOR_EACH_SUBRTX (iter, array, PATTERN (insn), ALL)
     {
       const_rtx x = *iter;
-      if (GET_CODE (x) == LABEL_REF
+      if (LABEL_REF_P (x)
 	  && !LABEL_REF_NONLOCAL_P (x)
 	  && (!JUMP_P (insn)
 	      || !label_is_jump_target_p (label_ref_label (x), insn))
diff --git a/gcc/cselib.c b/gcc/cselib.c
index 7b0545e779c..27be48d2154 100644
--- a/gcc/cselib.c
+++ b/gcc/cselib.c
@@ -1474,7 +1474,7 @@ expand_loc (struct elt_loc_list *p, struct expand_value_data *evd,
 	      fprintf (dump_file, "\n");
 	    }
 	  if (GET_CODE (p->loc) == LO_SUM
-	      && GET_CODE (XEXP (p->loc, 1)) == SYMBOL_REF
+	      && SYMBOL_REF_P (XEXP (p->loc, 1))
 	      && p->setting_insn
 	      && (note = find_reg_note (p->setting_insn, REG_EQUAL, NULL_RTX))
 	      && XEXP (note, 0) == XEXP (p->loc, 1))
@@ -1692,7 +1692,7 @@ cselib_expand_value_rtx_1 (rtx orig, struct expand_value_data *evd,
 				     GET_MODE (SUBREG_REG (orig)),
 				     SUBREG_BYTE (orig));
 	if (scopy == NULL
-	    || (GET_CODE (scopy) == SUBREG
+	    || (SUBREG_P (scopy)
 		&& !REG_P (SUBREG_REG (scopy))
 		&& !MEM_P (SUBREG_REG (scopy))))
 	  return NULL;
@@ -2376,7 +2376,7 @@ cselib_invalidate_mem (rtx mem_rtx)
 void
 cselib_invalidate_rtx (rtx dest, const_rtx setter)
 {
-  while (GET_CODE (dest) == SUBREG
+  while (SUBREG_P (dest)
 	 || GET_CODE (dest) == ZERO_EXTRACT
 	 || GET_CODE (dest) == STRICT_LOW_PART)
     dest = XEXP (dest, 0);
diff --git a/gcc/dbxout.c b/gcc/dbxout.c
index 81577dfe5cd..2ecca9a7d81 100644
--- a/gcc/dbxout.c
+++ b/gcc/dbxout.c
@@ -2740,7 +2740,7 @@ dbxout_symbol (tree decl, int local ATTRIBUTE_UNUSED)
       if (context && DECL_FROM_INLINE (decl))
 	break;
       if (!MEM_P (decl_rtl)
-	  || GET_CODE (XEXP (decl_rtl, 0)) != SYMBOL_REF)
+	  || !SYMBOL_REF_P (XEXP (decl_rtl, 0)))
 	break;
 
       if (flag_debug_only_used_symbols)
@@ -3000,11 +3000,11 @@ dbxout_symbol_location (tree decl, tree type, const char *suffix, rtx home)
      If the decl was from an inline function, then its rtl
      is not identically the rtl that was used in this
      particular compilation.  */
-  if (GET_CODE (home) == SUBREG)
+  if (SUBREG_P (home))
     {
       rtx value = home;
 
-      while (GET_CODE (value) == SUBREG)
+      while (SUBREG_P (value))
 	value = SUBREG_REG (value);
       if (REG_P (value))
 	{
@@ -3033,7 +3033,7 @@ dbxout_symbol_location (tree decl, tree type, const char *suffix, rtx home)
      no letter at all, and N_LSYM, for auto variable,
      r and N_RSYM for register variable.  */
 
-  if (MEM_P (home) && GET_CODE (XEXP (home, 0)) == SYMBOL_REF)
+  if (MEM_P (home) && SYMBOL_REF_P (XEXP (home, 0)))
     {
       if (TREE_PUBLIC (decl))
 	{
@@ -3058,13 +3058,13 @@ dbxout_symbol_location (tree decl, tree type, const char *suffix, rtx home)
 	     dumped into a constant pool.  Alternatively, the symbol
 	     in the constant pool might be referenced by a different
 	     symbol.  */
-	  if (GET_CODE (addr) == SYMBOL_REF
+	  if (SYMBOL_REF_P (addr)
 	      && CONSTANT_POOL_ADDRESS_P (addr))
 	    {
 	      bool marked;
 	      rtx tmp = get_pool_constant_mark (addr, &marked);
 
-	      if (GET_CODE (tmp) == SYMBOL_REF)
+	      if (SYMBOL_REF_P (tmp))
 		{
 		  addr = tmp;
 		  if (CONSTANT_POOL_ADDRESS_P (addr))
@@ -3072,7 +3072,7 @@ dbxout_symbol_location (tree decl, tree type, const char *suffix, rtx home)
 		  else
 		    marked = true;
 		}
-	      else if (GET_CODE (tmp) == LABEL_REF)
+	      else if (LABEL_REF_P (tmp))
 		{
 		  addr = tmp;
 		  marked = true;
@@ -3329,17 +3329,17 @@ dbxout_common_check (tree decl, int *value)
     return NULL;
 
   home = DECL_RTL (decl);
-  if (home == NULL_RTX || GET_CODE (home) != MEM)
+  if (home == NULL_RTX || !MEM_P (home))
     return NULL;
 
   sym_addr = dbxout_expand_expr (DECL_VALUE_EXPR (decl));
-  if (sym_addr == NULL_RTX || GET_CODE (sym_addr) != MEM)
+  if (sym_addr == NULL_RTX || !MEM_P (sym_addr))
     return NULL;
 
   sym_addr = XEXP (sym_addr, 0);
   if (GET_CODE (sym_addr) == CONST)
     sym_addr = XEXP (sym_addr, 0);
-  if ((GET_CODE (sym_addr) == SYMBOL_REF || GET_CODE (sym_addr) == PLUS)
+  if ((SYMBOL_REF_P (sym_addr) || GET_CODE (sym_addr) == PLUS)
       && DECL_INITIAL (decl) == 0)
     {
 
diff --git a/gcc/defaults.h b/gcc/defaults.h
index af7ea185f1e..f4bdc7b0247 100644
--- a/gcc/defaults.h
+++ b/gcc/defaults.h
@@ -1170,7 +1170,7 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
 /* For most ports anything that evaluates to a constant symbolic
    or integer value is acceptable as a constant address.  */
 #ifndef CONSTANT_ADDRESS_P
-#define CONSTANT_ADDRESS_P(X)   (CONSTANT_P (X) && GET_CODE (X) != CONST_DOUBLE)
+#define CONSTANT_ADDRESS_P(X)   (CONSTANT_P (X) && !CONST_DOUBLE_P (X))
 #endif
 
 #ifndef MAX_FIXED_MODE_SIZE
diff --git a/gcc/df-core.c b/gcc/df-core.c
index be19aba0f1e..1eae8dba891 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1955,7 +1955,7 @@ df_find_def (rtx_insn *insn, rtx reg)
 {
   df_ref def;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   gcc_assert (REG_P (reg));
 
@@ -1984,7 +1984,7 @@ df_find_use (rtx_insn *insn, rtx reg)
 {
   df_ref use;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   gcc_assert (REG_P (reg));
 
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index d32c688510c..bc3f38a94bd 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -2811,7 +2811,7 @@ df_word_lr_mark_ref (df_ref ref, bool is_set, regset live)
   int which_subword = -1;
   bool changed = false;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (orig_reg);
   regno = REGNO (reg);
   reg_mode = GET_MODE (reg);
@@ -2819,7 +2819,7 @@ df_word_lr_mark_ref (df_ref ref, bool is_set, regset live)
       || maybe_ne (GET_MODE_SIZE (reg_mode), 2 * UNITS_PER_WORD))
     return true;
 
-  if (GET_CODE (orig_reg) == SUBREG
+  if (SUBREG_P (orig_reg)
       && read_modify_subreg_p (orig_reg))
     {
       gcc_assert (DF_REF_FLAGS_IS_SET (ref, DF_REF_PARTIAL));
@@ -4002,7 +4002,7 @@ find_memory_stores (rtx x, const_rtx pat ATTRIBUTE_UNUSED,
 		    void *data ATTRIBUTE_UNUSED)
 {
   int *pflags = (int *)data;
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     x = XEXP (x, 0);
   /* Treat stores to SP as stores to memory, this will prevent problems
      when there are references to the stack frame.  */
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index 03294a8a2c3..c3d52be4e49 100644
--- a/gcc/df-scan.c
+++ b/gcc/df-scan.c
@@ -2474,7 +2474,7 @@ df_ref_create_structure (enum df_ref_class cl,
 			 int ref_flags)
 {
   df_ref this_ref = NULL;
-  unsigned int regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
+  unsigned int regno = REGNO (SUBREG_P (reg) ? SUBREG_REG (reg) : reg);
   struct df_scan_problem_data *problem_data
     = (struct df_scan_problem_data *) df_scan->problem_data;
 
@@ -2561,9 +2561,9 @@ df_ref_record (enum df_ref_class cl,
 {
   unsigned int regno;
 
-  gcc_checking_assert (REG_P (reg) || GET_CODE (reg) == SUBREG);
+  gcc_checking_assert (REG_P (reg) || SUBREG_P (reg));
 
-  regno = REGNO (GET_CODE (reg) == SUBREG ? SUBREG_REG (reg) : reg);
+  regno = REGNO (SUBREG_P (reg) ? SUBREG_REG (reg) : reg);
   if (regno < FIRST_PSEUDO_REGISTER)
     {
       struct df_mw_hardreg *hardreg = NULL;
@@ -2573,7 +2573,7 @@ df_ref_record (enum df_ref_class cl,
       unsigned int endregno;
       df_ref ref;
 
-      if (GET_CODE (reg) == SUBREG)
+      if (SUBREG_P (reg))
 	{
 	  regno += subreg_regno_offset (regno, GET_MODE (SUBREG_REG (reg)),
 					SUBREG_BYTE (reg), GET_MODE (reg));
@@ -2590,7 +2590,7 @@ df_ref_record (enum df_ref_class cl,
 	{
 	  /* Sets to a subreg of a multiword register are partial.
 	     Sets to a non-subreg of a multiword register are not.  */
-	  if (GET_CODE (reg) == SUBREG)
+	  if (SUBREG_P (reg))
 	    ref_flags |= DF_REF_PARTIAL;
 	  ref_flags |= DF_REF_MW_HARDREG;
 
@@ -2673,7 +2673,7 @@ df_def_record_1 (class df_collection_rec *collection_rec,
 	df_ref_record (DF_REF_BASE, collection_rec,
 		       dst, NULL, bb, insn_info, DF_REF_REG_USE, flags);
     }
-  else if (GET_CODE (dst) == SUBREG && REG_P (SUBREG_REG (dst)))
+  else if (SUBREG_P (dst) && REG_P (SUBREG_REG (dst)))
     {
       if (read_modify_subreg_p (dst))
 	flags |= DF_REF_READ_WRITE | DF_REF_PARTIAL;
@@ -2753,7 +2753,7 @@ df_find_hard_reg_defs_1 (rtx dst, HARD_REG_SET *defs)
   /* At this point if we do not have a reg or a subreg, just return.  */
   if (REG_P (dst) && HARD_REGISTER_P (dst))
     SET_HARD_REG_BIT (*defs, REGNO (dst));
-  else if (GET_CODE (dst) == SUBREG
+  else if (SUBREG_P (dst)
 	   && REG_P (SUBREG_REG (dst)) && HARD_REGISTER_P (dst))
     SET_HARD_REG_BIT (*defs, REGNO (SUBREG_REG (dst)));
 }
@@ -2919,7 +2919,7 @@ df_uses_record (class df_collection_rec *collection_rec,
 		 SUBREG.  */
 		dst = XEXP (dst, 0);
 		df_uses_record (collection_rec,
-				(GET_CODE (dst) == SUBREG) ? &SUBREG_REG (dst) : temp,
+				SUBREG_P (dst) ? &SUBREG_REG (dst) : temp,
 				DF_REF_REG_USE, bb, insn_info,
 				DF_REF_READ_WRITE | DF_REF_STRICT_LOW_PART);
 	      }
@@ -2930,7 +2930,7 @@ df_uses_record (class df_collection_rec *collection_rec,
 				DF_REF_REG_USE, bb, insn_info, flags);
 		df_uses_record (collection_rec, &XEXP (dst, 2),
 				DF_REF_REG_USE, bb, insn_info, flags);
-                if (GET_CODE (XEXP (dst,0)) == MEM)
+                if (MEM_P (XEXP (dst,0)))
                   df_uses_record (collection_rec, &XEXP (dst, 0),
                                   DF_REF_REG_USE, bb, insn_info,
                                   flags);
diff --git a/gcc/df.h b/gcc/df.h
index 2e3b825065e..039fe9e9b22 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -647,10 +647,10 @@ public:
 /* Macros to access the elements within the ref structure.  */
 
 
-#define DF_REF_REAL_REG(REF) (GET_CODE ((REF)->base.reg) == SUBREG \
+#define DF_REF_REAL_REG(REF) (SUBREG_P ((REF)->base.reg) \
 				? SUBREG_REG ((REF)->base.reg) : ((REF)->base.reg))
 #define DF_REF_REGNO(REF) ((REF)->base.regno)
-#define DF_REF_REAL_LOC(REF) (GET_CODE (*((REF)->regular_ref.loc)) == SUBREG \
+#define DF_REF_REAL_LOC(REF) (SUBREG_P (*((REF)->regular_ref.loc)) \
                                ? &SUBREG_REG (*((REF)->regular_ref.loc)) : ((REF)->regular_ref.loc))
 #define DF_REF_REG(REF) ((REF)->base.reg)
 #define DF_REF_LOC(REF) (DF_REF_CLASS (REF) == DF_REF_REGULAR ? \
diff --git a/gcc/dojump.c b/gcc/dojump.c
index bac37a357a9..040d2c260fb 100644
--- a/gcc/dojump.c
+++ b/gcc/dojump.c
@@ -610,7 +610,7 @@ do_jump (tree exp, rtx_code_label *if_false_label,
       temp = expand_normal (exp);
       do_pending_stack_adjust ();
       /* The RTL optimizers prefer comparisons against pseudos.  */
-      if (GET_CODE (temp) == SUBREG)
+      if (SUBREG_P (temp))
 	{
 	  /* Compare promoted variables in their promoted mode.  */
 	  if (SUBREG_PROMOTED_VAR_P (temp)
diff --git a/gcc/dse.c b/gcc/dse.c
index 8d7358d02b4..b7511a320da 100644
--- a/gcc/dse.c
+++ b/gcc/dse.c
@@ -1056,7 +1056,7 @@ const_or_frame_p (rtx x)
   if (CONSTANT_P (x))
     return true;
 
-  if (GET_CODE (x) == REG)
+  if (REG_P (x))
     {
       /* Note that we have to test for the actual rtx used for the frame
 	 and arg pointers and not just the register number in case we have
@@ -1464,7 +1464,7 @@ record_store (rtx body, bb_info_t bb_info)
       /* No place to keep the value after ra.  */
       && !reload_completed
       && (REG_P (SET_SRC (body))
-	  || GET_CODE (SET_SRC (body)) == SUBREG
+	  || SUBREG_P (SET_SRC (body))
 	  || CONSTANT_P (SET_SRC (body)))
       && !MEM_VOLATILE_P (mem)
       /* Sometimes the store and reload is used for truncation and
@@ -2439,7 +2439,7 @@ scan_insn (bb_info_t bb_info, rtx_insn *insn)
       if (!const_call
 	  && (call = get_call_rtx_from (insn))
 	  && (sym = XEXP (XEXP (call, 0), 0))
-	  && GET_CODE (sym) == SYMBOL_REF
+	  && SYMBOL_REF_P (sym)
 	  && SYMBOL_REF_DECL (sym)
 	  && TREE_CODE (SYMBOL_REF_DECL (sym)) == FUNCTION_DECL
 	  && fndecl_built_in_p (SYMBOL_REF_DECL (sym), BUILT_IN_MEMSET))
diff --git a/gcc/dwarf2asm.c b/gcc/dwarf2asm.c
index 488e54b72ec..fc655272ed8 100644
--- a/gcc/dwarf2asm.c
+++ b/gcc/dwarf2asm.c
@@ -912,7 +912,7 @@ dw2_force_const_mem (rtx x, bool is_public)
   if (! indirect_pool)
     indirect_pool = hash_map<const char *, tree>::create_ggc (64);
 
-  gcc_assert (GET_CODE (x) == SYMBOL_REF);
+  gcc_assert (SYMBOL_REF_P (x));
 
   key = XSTR (x, 0);
   tree *slot = indirect_pool->get (key);
@@ -1078,13 +1078,13 @@ dw2_asm_output_encoded_addr_rtx (int encoding, rtx addr, bool is_public,
 
 #ifdef ASM_OUTPUT_DWARF_DATAREL
 	case DW_EH_PE_datarel:
-	  gcc_assert (GET_CODE (addr) == SYMBOL_REF);
+	  gcc_assert (SYMBOL_REF_P (addr));
 	  ASM_OUTPUT_DWARF_DATAREL (asm_out_file, size, XSTR (addr, 0));
 	  break;
 #endif
 
 	case DW_EH_PE_pcrel:
-	  gcc_assert (GET_CODE (addr) == SYMBOL_REF);
+	  gcc_assert (SYMBOL_REF_P (addr));
 #ifdef ASM_OUTPUT_DWARF_PCREL
 	  ASM_OUTPUT_DWARF_PCREL (asm_out_file, size, XSTR (addr, 0));
 #else
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index aa7fd7eb465..b2b4f6d82b2 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -4222,7 +4222,7 @@ static inline int
 is_pseudo_reg (const_rtx rtl)
 {
   return ((REG_P (rtl) && REGNO (rtl) >= FIRST_PSEUDO_REGISTER)
-	  || (GET_CODE (rtl) == SUBREG
+	  || (SUBREG_P (rtl)
 	      && REGNO (SUBREG_REG (rtl)) >= FIRST_PSEUDO_REGISTER));
 }
 
@@ -6925,7 +6925,7 @@ attr_checksum (dw_attr_node *at, struct md5_ctx *ctx, int *mark)
 
     case dw_val_class_addr:
       r = AT_addr (at);
-      gcc_assert (GET_CODE (r) == SYMBOL_REF);
+      gcc_assert (SYMBOL_REF_P (r));
       CHECKSUM_STRING (XSTR (r, 0));
       break;
 
@@ -7225,7 +7225,7 @@ attr_checksum_ordered (enum dwarf_tag tag, dw_attr_node *at,
 
     case dw_val_class_addr:
       r = AT_addr (at);
-      gcc_assert (GET_CODE (r) == SYMBOL_REF);
+      gcc_assert (SYMBOL_REF_P (r));
       CHECKSUM_ULEB128 (DW_FORM_string);
       CHECKSUM_STRING (XSTR (r, 0));
       break;
@@ -14503,7 +14503,7 @@ const_ok_for_output_1 (rtx rtl)
 	 rather than DECL_THREAD_LOCAL_P is not just an optimization.  */
       if (flag_checking
 	  && (XVECLEN (rtl, 0) == 0
-	      || GET_CODE (XVECEXP (rtl, 0, 0)) != SYMBOL_REF
+	      || !SYMBOL_REF_P (XVECEXP (rtl, 0, 0))
 	      || SYMBOL_REF_TLS_MODEL (XVECEXP (rtl, 0, 0)) == TLS_MODEL_NONE))
 	inform (current_function_decl
 		? DECL_SOURCE_LOCATION (current_function_decl)
@@ -14619,7 +14619,7 @@ const_ok_for_output_1 (rtx rtl)
 static bool
 const_ok_for_output (rtx rtl)
 {
-  if (GET_CODE (rtl) == SYMBOL_REF)
+  if (SYMBOL_REF_P (rtl))
     return const_ok_for_output_1 (rtl);
 
   if (GET_CODE (rtl) == CONST)
@@ -15129,7 +15129,7 @@ clz_loc_descriptor (rtx rtl, scalar_int_mode mode,
     msb = immed_wide_int_const
       (wi::set_bit_in_zero (GET_MODE_PRECISION (mode) - 1,
 			    GET_MODE_PRECISION (mode)), mode);
-  if (GET_CODE (msb) == CONST_INT && INTVAL (msb) < 0)
+  if (CONST_INT_P (msb) && INTVAL (msb) < 0)
     tmp = new_loc_descr (HOST_BITS_PER_WIDE_INT == 32
 			 ? DW_OP_const4u : HOST_BITS_PER_WIDE_INT == 64
 			 ? DW_OP_const8u : DW_OP_constu, INTVAL (msb), 0);
@@ -15755,7 +15755,7 @@ mem_loc_descriptor (rtx rtl, machine_mode mode,
 	  goto symref;
 	}
 
-      if (GET_CODE (rtl) == SYMBOL_REF
+      if (SYMBOL_REF_P (rtl)
 	  && SYMBOL_REF_TLS_MODEL (rtl) != TLS_MODEL_NONE)
 	{
 	  dw_loc_descr_ref temp;
@@ -16813,7 +16813,7 @@ loc_descriptor (rtx rtl, machine_mode mode,
       if (mode == VOIDmode
 	  || CONST_SCALAR_INT_P (XEXP (rtl, 0))
 	  || CONST_DOUBLE_AS_FLOAT_P (XEXP (rtl, 0))
-	  || GET_CODE (XEXP (rtl, 0)) == CONST_VECTOR)
+	  || CONST_VECTOR_P (XEXP (rtl, 0)))
 	{
 	  loc_result = loc_descriptor (XEXP (rtl, 0), mode, initialized);
 	  break;
@@ -17133,7 +17133,7 @@ dw_sra_loc_expr (tree decl, rtx loc)
 		  || GET_CODE (varloc) == SIGN_EXTEND
 		  || GET_CODE (varloc) == ZERO_EXTEND)
 		varloc = XEXP (varloc, 0);
-	      else if (GET_CODE (varloc) == SUBREG)
+	      else if (SUBREG_P (varloc))
 		varloc = SUBREG_REG (varloc);
 	      else
 		break;
@@ -17453,7 +17453,7 @@ cst_pool_loc_descr (tree loc)
       gcc_assert (!rtl);
       return 0;
     }
-  gcc_assert (GET_CODE (XEXP (rtl, 0)) == SYMBOL_REF);
+  gcc_assert (SYMBOL_REF_P (XEXP (rtl, 0)));
 
   /* TODO: We might get more coverage if we was actually delaying expansion
      of all expressions till end of compilation when constant pools are fully
@@ -18421,7 +18421,7 @@ loc_list_from_tree_1 (tree loc, int want_address,
 	      val &= GET_MODE_MASK (DECL_MODE (loc));
 	    ret = int_loc_descriptor (val);
 	  }
-	else if (GET_CODE (rtl) == CONST_STRING)
+	else if (CONST_STRING_P (rtl))
 	  {
 	    expansion_failed (loc, NULL_RTX, "CONST_STRING");
 	    return 0;
@@ -19687,7 +19687,7 @@ add_const_value_attribute (dw_die_ref die, rtx rtl)
       return false;
 
     case MEM:
-      if (GET_CODE (XEXP (rtl, 0)) == CONST_STRING
+      if (CONST_STRING_P (XEXP (rtl, 0))
 	  && MEM_READONLY_P (rtl)
 	  && GET_MODE (rtl) == BLKmode)
 	{
@@ -20063,7 +20063,7 @@ rtl_for_decl_location (tree decl)
     {
       rtl = make_decl_rtl_for_debug (decl);
       if (!MEM_P (rtl)
-	  || GET_CODE (XEXP (rtl, 0)) != SYMBOL_REF
+	  || !SYMBOL_REF_P (XEXP (rtl, 0))
 	  || SYMBOL_REF_DECL (XEXP (rtl, 0)) != decl)
 	rtl = NULL_RTX;
     }
@@ -20165,7 +20165,7 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, bool cache_p)
      the location.  */
 
   rtl = rtl_for_decl_location (decl);
-  if (rtl && (CONSTANT_P (rtl) || GET_CODE (rtl) == CONST_STRING)
+  if (rtl && (CONSTANT_P (rtl) || CONST_STRING_P (rtl))
       && add_const_value_attribute (die, rtl))
     return true;
 
@@ -20186,7 +20186,7 @@ add_location_or_const_value_attribute (dw_die_ref die, tree decl, bool cache_p)
       rtl = NOTE_VAR_LOCATION_LOC (node->loc);
       if (GET_CODE (rtl) == EXPR_LIST)
 	rtl = XEXP (rtl, 0);
-      if ((CONSTANT_P (rtl) || GET_CODE (rtl) == CONST_STRING)
+      if ((CONSTANT_P (rtl) || CONST_STRING_P (rtl))
 	  && add_const_value_attribute (die, rtl))
 	 return true;
     }
@@ -21798,7 +21798,7 @@ decl_start_label (tree decl)
   gcc_assert (MEM_P (x));
 
   x = XEXP (x, 0);
-  gcc_assert (GET_CODE (x) == SYMBOL_REF);
+  gcc_assert (SYMBOL_REF_P (x));
 
   fnname = XSTR (x, 0);
   return fnname;
@@ -23816,7 +23816,7 @@ gen_variable_die (tree decl, tree origin, dw_die_ref context_die)
 	      if (single_element_loc_list_p (loc)
                   && loc->expr->dw_loc_opc == DW_OP_addr
 		  && loc->expr->dw_loc_next == NULL
-		  && GET_CODE (loc->expr->dw_loc_oprnd1.v.val_addr) == SYMBOL_REF)
+		  && SYMBOL_REF_P (loc->expr->dw_loc_oprnd1.v.val_addr))
 		{
 		  rtx x = loc->expr->dw_loc_oprnd1.v.val_addr;
 		  loc->expr->dw_loc_oprnd1.v.val_addr
@@ -27334,7 +27334,7 @@ dwarf2out_var_location (rtx_insn *loc_note)
 	      if (GET_CODE (x) == CALL)
 		x = XEXP (x, 0);
 	      if (!MEM_P (x)
-		  || GET_CODE (XEXP (x, 0)) != SYMBOL_REF
+		  || !SYMBOL_REF_P (XEXP (x, 0))
 		  || !SYMBOL_REF_DECL (XEXP (x, 0))
 		  || (TREE_CODE (SYMBOL_REF_DECL (XEXP (x, 0)))
 		      != FUNCTION_DECL))
@@ -27512,7 +27512,7 @@ create_label:
 	  if (MEM_P (XEXP (x, 0)))
 	    x = XEXP (x, 0);
 	  /* First, look for a memory access to a symbol_ref.  */
-	  if (GET_CODE (XEXP (x, 0)) == SYMBOL_REF
+	  if (SYMBOL_REF_P (XEXP (x, 0))
 	      && SYMBOL_REF_DECL (XEXP (x, 0))
 	      && TREE_CODE (SYMBOL_REF_DECL (XEXP (x, 0))) == FUNCTION_DECL)
 	    ca_loc->symbol_ref = XEXP (x, 0);
@@ -29855,7 +29855,7 @@ resolve_one_addr (rtx *addr)
 {
   rtx rtl = *addr;
 
-  if (GET_CODE (rtl) == CONST_STRING)
+  if (CONST_STRING_P (rtl))
     {
       size_t len = strlen (XSTR (rtl, 0)) + 1;
       tree t = build_string (len, XSTR (rtl, 0));
@@ -29866,7 +29866,7 @@ resolve_one_addr (rtx *addr)
       if (!rtl || !MEM_P (rtl))
 	return false;
       rtl = XEXP (rtl, 0);
-      if (GET_CODE (rtl) == SYMBOL_REF
+      if (SYMBOL_REF_P (rtl)
 	  && SYMBOL_REF_DECL (rtl)
 	  && !TREE_ASM_WRITTEN (SYMBOL_REF_DECL (rtl)))
 	return false;
@@ -29875,7 +29875,7 @@ resolve_one_addr (rtx *addr)
       return true;
     }
 
-  if (GET_CODE (rtl) == SYMBOL_REF
+  if (SYMBOL_REF_P (rtl)
       && SYMBOL_REF_DECL (rtl))
     {
       if (TREE_CONSTANT_POOL_ADDRESS_P (rtl))
@@ -29915,7 +29915,7 @@ string_cst_pool_decl (tree t)
   if (!rtl || !MEM_P (rtl))
     return NULL_RTX;
   rtl = XEXP (rtl, 0);
-  if (GET_CODE (rtl) != SYMBOL_REF
+  if (!SYMBOL_REF_P (rtl)
       || SYMBOL_REF_DECL (rtl) == NULL_TREE)
     return NULL_RTX;
 
@@ -29960,7 +29960,7 @@ optimize_one_addr_into_implicit_ptr (dw_loc_descr_ref loc)
       offset = INTVAL (XEXP (XEXP (rtl, 0), 1));
       rtl = XEXP (XEXP (rtl, 0), 0);
     }
-  if (GET_CODE (rtl) == CONST_STRING)
+  if (CONST_STRING_P (rtl))
     {
       size_t len = strlen (XSTR (rtl, 0)) + 1;
       tree t = build_string (len, XSTR (rtl, 0));
@@ -29972,7 +29972,7 @@ optimize_one_addr_into_implicit_ptr (dw_loc_descr_ref loc)
       if (!rtl)
 	return false;
     }
-  if (GET_CODE (rtl) == SYMBOL_REF && SYMBOL_REF_DECL (rtl))
+  if (SYMBOL_REF_P (rtl) && SYMBOL_REF_DECL (rtl))
     {
       decl = SYMBOL_REF_DECL (rtl);
       if (VAR_P (decl) && !DECL_EXTERNAL (decl))
@@ -30596,7 +30596,7 @@ resolve_addr (dw_die_ref die)
 	      if (l != NULL
 		  && l->dw_loc_next == NULL
 		  && l->dw_loc_opc == DW_OP_addr
-		  && GET_CODE (l->dw_loc_oprnd1.v.val_addr) == SYMBOL_REF
+		  && SYMBOL_REF_P (l->dw_loc_oprnd1.v.val_addr)
 		  && SYMBOL_REF_DECL (l->dw_loc_oprnd1.v.val_addr)
 		  && a->dw_attr == DW_AT_location)
 		{
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index a667cdab94e..13cfd0b00ea 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -1294,7 +1294,7 @@ set_reg_attrs_from_value (rtx reg, rtx x)
   while (GET_CODE (x) == SIGN_EXTEND
 	 || GET_CODE (x) == ZERO_EXTEND
 	 || GET_CODE (x) == TRUNCATE
-	 || (GET_CODE (x) == SUBREG && subreg_lowpart_p (x)))
+	 || (SUBREG_P (x) && subreg_lowpart_p (x)))
     {
 #if defined(POINTERS_EXTEND_UNSIGNED)
       if (((GET_CODE (x) == SIGN_EXTEND && POINTERS_EXTEND_UNSIGNED)
@@ -1377,7 +1377,7 @@ set_reg_attrs_for_decl_rtl (tree t, rtx x)
   if (!t)
     return;
   tree tdecl = t;
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       gcc_assert (subreg_lowpart_p (x));
       x = SUBREG_REG (x);
@@ -1595,8 +1595,8 @@ gen_lowpart_common (machine_mode mode, rtx x)
       else if (GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (int_innermode))
 	return gen_rtx_fmt_e (GET_CODE (x), int_mode, XEXP (x, 0));
     }
-  else if (GET_CODE (x) == SUBREG || REG_P (x)
-	   || GET_CODE (x) == CONCAT || GET_CODE (x) == CONST_VECTOR
+  else if (SUBREG_P (x) || REG_P (x)
+	   || GET_CODE (x) == CONCAT || CONST_VECTOR_P (x)
 	   || CONST_DOUBLE_AS_FLOAT_P (x) || CONST_SCALAR_INT_P (x)
 	   || CONST_POLY_INT_P (x))
     return lowpart_subreg (mode, x, innermode);
@@ -1690,7 +1690,7 @@ subreg_size_highpart_offset (poly_uint64 outer_bytes, poly_uint64 inner_bytes)
 int
 subreg_lowpart_p (const_rtx x)
 {
-  if (GET_CODE (x) != SUBREG)
+  if (!SUBREG_P (x))
     return 1;
   else if (GET_MODE (SUBREG_REG (x)) == VOIDmode)
     return 0;
@@ -3348,7 +3348,7 @@ make_safe_from (rtx x, rtx other)
   if ((MEM_P (other)
        && ! CONSTANT_P (x)
        && !REG_P (x)
-       && GET_CODE (x) != SUBREG)
+       && !SUBREG_P (x))
       || (REG_P (other)
 	  && (REGNO (other) < FIRST_PSEUDO_REGISTER
 	      || reg_mentioned_p (other, x))))
@@ -5416,7 +5416,7 @@ set_for_reg_notes (rtx insn)
     reg = XEXP (reg, 0);
 
   /* Check that we have a register.  */
-  if (!(REG_P (reg) || GET_CODE (reg) == SUBREG))
+  if (!(REG_P (reg) || SUBREG_P (reg)))
     return NULL_RTX;
 
   return pat;
@@ -6004,7 +6004,7 @@ gen_vec_duplicate (machine_mode mode, rtx x)
 
 /* A subroutine of const_vec_series_p that handles the case in which:
 
-     (GET_CODE (X) == CONST_VECTOR
+     (CONST_VECTOR_P (X)
       && CONST_VECTOR_NPATTERNS (X) == 1
       && !CONST_VECTOR_DUPLICATE_P (X))
 
diff --git a/gcc/explow.c b/gcc/explow.c
index 7eb854bca4a..441847d02ba 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -117,12 +117,12 @@ plus_constant (machine_mode mode, rtx x, poly_int64 c, bool inplace)
       /* If this is a reference to the constant pool, try replacing it with
 	 a reference to a new constant.  If the resulting address isn't
 	 valid, don't return it because we have no way to validize it.  */
-      if (GET_CODE (XEXP (x, 0)) == SYMBOL_REF
+      if (SYMBOL_REF_P (XEXP (x, 0))
 	  && CONSTANT_POOL_ADDRESS_P (XEXP (x, 0)))
 	{
 	  rtx cst = get_pool_constant (XEXP (x, 0));
 
-	  if (GET_CODE (cst) == CONST_VECTOR
+	  if (CONST_VECTOR_P (cst)
 	      && GET_MODE_INNER (GET_MODE (cst)) == mode)
 	    {
 	      cst = gen_lowpart (mode, cst);
@@ -197,7 +197,7 @@ plus_constant (machine_mode mode, rtx x, poly_int64 c, bool inplace)
   if (maybe_ne (c, 0))
     x = gen_rtx_PLUS (mode, x, gen_int_mode (c, mode));
 
-  if (GET_CODE (x) == SYMBOL_REF || GET_CODE (x) == LABEL_REF)
+  if (SYMBOL_REF_P (x) || LABEL_REF_P (x))
     return x;
   else if (all_constant)
     return gen_rtx_CONST (mode, x);
@@ -558,7 +558,7 @@ use_anchored_address (rtx x)
     }
 
   /* Check whether BASE is suitable for anchors.  */
-  if (GET_CODE (base) != SYMBOL_REF
+  if (!SYMBOL_REF_P (base)
       || !SYMBOL_REF_HAS_BLOCK_INFO_P (base)
       || SYMBOL_REF_ANCHOR_P (base)
       || SYMBOL_REF_BLOCK (base) == NULL
@@ -680,17 +680,17 @@ force_reg (machine_mode mode, rtx x)
      known alignment of that pointer.  */
   {
     unsigned align = 0;
-    if (GET_CODE (x) == SYMBOL_REF)
+    if (SYMBOL_REF_P (x))
       {
         align = BITS_PER_UNIT;
 	if (SYMBOL_REF_DECL (x) && DECL_P (SYMBOL_REF_DECL (x)))
 	  align = DECL_ALIGN (SYMBOL_REF_DECL (x));
       }
-    else if (GET_CODE (x) == LABEL_REF)
+    else if (LABEL_REF_P (x))
       align = BITS_PER_UNIT;
     else if (GET_CODE (x) == CONST
 	     && GET_CODE (XEXP (x, 0)) == PLUS
-	     && GET_CODE (XEXP (XEXP (x, 0), 0)) == SYMBOL_REF
+	     && SYMBOL_REF_P (XEXP (XEXP (x, 0), 0))
 	     && CONST_INT_P (XEXP (XEXP (x, 0), 1)))
       {
 	rtx s = XEXP (XEXP (x, 0), 0);
@@ -2178,7 +2178,7 @@ anti_adjust_stack_and_probe (rtx size, bool adjust_back)
       if (temp != const0_rtx)
 	{
 	  /* Manual CSE if the difference is not known at compile-time.  */
-	  if (GET_CODE (temp) != CONST_INT)
+	  if (!CONST_INT_P (temp))
 	    temp = gen_rtx_MINUS (Pmode, size, rounded_size_op);
 	  anti_adjust_stack (temp);
 	  emit_stack_probe (stack_pointer_rtx);
diff --git a/gcc/expmed.c b/gcc/expmed.c
index c582f3a1e62..25d19a05794 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -622,7 +622,7 @@ store_bit_field_using_insv (const extraction_insn *insv, rtx op0,
 
       /* If xop0 is a register, we need it in OP_MODE
 	 to make it acceptable to the format of insv.  */
-      if (GET_CODE (xop0) == SUBREG)
+      if (SUBREG_P (xop0))
 	/* We can't just change the mode, because this might clobber op0,
 	   and we will need the original value of op0 if insv fails.  */
 	xop0 = gen_rtx_SUBREG (op_mode, SUBREG_REG (xop0), SUBREG_BYTE (xop0));
@@ -635,7 +635,7 @@ store_bit_field_using_insv (const extraction_insn *insv, rtx op0,
      truncate the result to the original destination.  Note that we can't
      just truncate the paradoxical subreg as (truncate:N (subreg:W (reg:N
      X) 0)) is (reg:N X).  */
-  if (GET_CODE (xop0) == SUBREG
+  if (SUBREG_P (xop0)
       && REG_P (SUBREG_REG (xop0))
       && !TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (SUBREG_REG (xop0)),
 					 op_mode))
@@ -735,7 +735,7 @@ store_bit_field_1 (rtx str_rtx, poly_uint64 bitsize, poly_uint64 bitnum,
 {
   rtx op0 = str_rtx;
 
-  while (GET_CODE (op0) == SUBREG)
+  while (SUBREG_P (op0))
     {
       bitnum += subreg_memory_offset (op0) * BITS_PER_UNIT;
       op0 = SUBREG_REG (op0);
@@ -875,7 +875,7 @@ store_integral_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
       rtx arg0 = op0;
       unsigned HOST_WIDE_INT subreg_off;
 
-      if (GET_CODE (arg0) == SUBREG)
+      if (SUBREG_P (arg0))
 	{
 	  /* Else we've got some float mode source being extracted into
 	     a different float mode destination -- this combination of
@@ -1330,7 +1330,7 @@ store_split_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
 
   /* Make sure UNIT isn't larger than BITS_PER_WORD, we can only handle that
      much at a time.  */
-  if (REG_P (op0) || GET_CODE (op0) == SUBREG)
+  if (REG_P (op0) || SUBREG_P (op0))
     unit = BITS_PER_WORD;
   else
     unit = MIN (MEM_ALIGN (op0), BITS_PER_WORD);
@@ -1375,7 +1375,7 @@ store_split_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
 	  && unit > BITS_PER_UNIT
 	  && maybe_gt (bitpos + bitsdone - thispos + unit, bitregion_end + 1)
 	  && !REG_P (op0)
-	  && (GET_CODE (op0) != SUBREG || !REG_P (SUBREG_REG (op0))))
+	  && (!SUBREG_P (op0) || !REG_P (SUBREG_REG (op0))))
 	{
 	  unit = unit / 2;
 	  continue;
@@ -1520,7 +1520,7 @@ extract_bit_field_using_extv (const extraction_insn *extv, rtx op0,
 
       /* If op0 is a register, we need it in EXT_MODE to make it
 	 acceptable to the format of ext(z)v.  */
-      if (GET_CODE (op0) == SUBREG && op0_mode.require () != ext_mode)
+      if (SUBREG_P (op0) && op0_mode.require () != ext_mode)
 	return NULL_RTX;
       if (REG_P (op0) && op0_mode.require () != ext_mode)
 	op0 = gen_lowpart_SUBREG (ext_mode, op0);
@@ -1604,7 +1604,7 @@ extract_bit_field_1 (rtx str_rtx, poly_uint64 bitsize, poly_uint64 bitnum,
   if (tmode == VOIDmode)
     tmode = mode;
 
-  while (GET_CODE (op0) == SUBREG)
+  while (SUBREG_P (op0))
     {
       bitnum += SUBREG_BYTE (op0) * BITS_PER_UNIT;
       op0 = SUBREG_REG (op0);
@@ -1764,7 +1764,7 @@ extract_bit_field_1 (rtx str_rtx, poly_uint64 bitsize, poly_uint64 bitnum,
 
 	  /* If we got a SUBREG, force it into a register since we
 	     aren't going to be able to do another SUBREG on it.  */
-	  if (GET_CODE (op0) == SUBREG)
+	  if (SUBREG_P (op0))
 	    op0 = force_reg (imode, op0);
 	}
       else
@@ -2250,7 +2250,7 @@ extract_split_bit_field (rtx op0, opt_scalar_int_mode op0_mode,
 
   /* Make sure UNIT isn't larger than BITS_PER_WORD, we can only handle that
      much at a time.  */
-  if (REG_P (op0) || GET_CODE (op0) == SUBREG)
+  if (REG_P (op0) || SUBREG_P (op0))
     unit = BITS_PER_WORD;
   else
     unit = MIN (MEM_ALIGN (op0), BITS_PER_WORD);
@@ -2477,7 +2477,7 @@ expand_shift_1 (enum tree_code code, machine_mode mode, rtx shifted,
 	op1 = gen_int_shift_amount (mode,
 				    (unsigned HOST_WIDE_INT) INTVAL (op1)
 				    % GET_MODE_BITSIZE (scalar_mode));
-      else if (GET_CODE (op1) == SUBREG
+      else if (SUBREG_P (op1)
 	       && subreg_lowpart_p (op1)
 	       && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1)))
 	       && SCALAR_INT_MODE_P (GET_MODE (op1)))
@@ -3349,7 +3349,7 @@ expand_mult_const (machine_mode mode, rtx op0, HOST_WIDE_INT val,
 	     we've set the inner register and must properly indicate that.  */
 	  tem = op0, nmode = mode;
 	  accum_inner = accum;
-	  if (GET_CODE (accum) == SUBREG)
+	  if (SUBREG_P (accum))
 	    {
 	      accum_inner = SUBREG_REG (accum);
 	      nmode = GET_MODE (accum_inner);
diff --git a/gcc/expr.c b/gcc/expr.c
index 20e3f9ce337..e22b02b8868 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -227,7 +227,7 @@ convert_move (rtx to, rtx from, int unsignedp)
      TO here.  */
 
   scalar_int_mode to_int_mode;
-  if (GET_CODE (from) == SUBREG
+  if (SUBREG_P (from)
       && SUBREG_PROMOTED_VAR_P (from)
       && is_a <scalar_int_mode> (to_mode, &to_int_mode)
       && (GET_MODE_PRECISION (subreg_promoted_mode (from))
@@ -238,7 +238,7 @@ convert_move (rtx to, rtx from, int unsignedp)
       from_mode = to_int_mode;
     }
 
-  gcc_assert (GET_CODE (to) != SUBREG || !SUBREG_PROMOTED_VAR_P (to));
+  gcc_assert (!SUBREG_P (to) || !SUBREG_PROMOTED_VAR_P (to));
 
   if (to_mode == from_mode
       || (from_mode == VOIDmode && CONSTANT_P (from)))
@@ -437,7 +437,7 @@ convert_mode_scalar (rtx to, rtx from, int unsignedp)
 	     so that we always generate the same set of insns for
 	     better cse'ing; if an intermediate assignment occurred,
 	     we won't be doing the operation directly on the SUBREG.  */
-	  if (optimize > 0 && GET_CODE (from) == SUBREG)
+	  if (optimize > 0 && SUBREG_P (from))
 	    from = force_reg (from_mode, from);
 	  emit_unop_insn (code, to, from, equiv_code);
 	  return;
@@ -519,7 +519,7 @@ convert_mode_scalar (rtx to, rtx from, int unsignedp)
 	     && ! mode_dependent_address_p (XEXP (from, 0),
 					    MEM_ADDR_SPACE (from)))
 	    || REG_P (from)
-	    || GET_CODE (from) == SUBREG))
+	    || SUBREG_P (from)))
 	from = force_reg (from_mode, from);
       convert_move (to, gen_lowpart (word_mode, from), 0);
       return;
@@ -538,7 +538,7 @@ convert_mode_scalar (rtx to, rtx from, int unsignedp)
 	     && ! mode_dependent_address_p (XEXP (from, 0),
 					    MEM_ADDR_SPACE (from)))
 	    || REG_P (from)
-	    || GET_CODE (from) == SUBREG))
+	    || SUBREG_P (from)))
 	from = force_reg (from_mode, from);
       if (REG_P (from) && REGNO (from) < FIRST_PSEUDO_REGISTER
 	  && !targetm.hard_regno_mode_ok (REGNO (from), to_mode))
@@ -656,7 +656,7 @@ convert_modes (machine_mode mode, machine_mode oldmode, rtx x, int unsignedp)
   /* If FROM is a SUBREG that indicates that we have already done at least
      the required extension, strip it.  */
 
-  if (GET_CODE (x) == SUBREG
+  if (SUBREG_P (x)
       && SUBREG_PROMOTED_VAR_P (x)
       && is_a <scalar_int_mode> (mode, &int_mode)
       && (GET_MODE_PRECISION (subreg_promoted_mode (x))
@@ -1659,7 +1659,7 @@ rtx
 emit_block_move (rtx x, rtx y, rtx size, enum block_op_methods method)
 {
   unsigned HOST_WIDE_INT max, min = 0;
-  if (GET_CODE (size) == CONST_INT)
+  if (CONST_INT_P (size))
     min = max = UINTVAL (size);
   else
     max = GET_MODE_MASK (GET_MODE (size));
@@ -3052,7 +3052,7 @@ rtx
 clear_storage (rtx object, rtx size, enum block_op_methods method)
 {
   unsigned HOST_WIDE_INT max, min = 0;
-  if (GET_CODE (size) == CONST_INT)
+  if (CONST_INT_P (size))
     min = max = UINTVAL (size);
   else
     max = GET_MODE_MASK (GET_MODE (size));
@@ -3255,7 +3255,7 @@ read_complex_part (rtx cplx, bool imag_p)
   ibitsize = GET_MODE_BITSIZE (imode);
 
   /* Special case reads from complex constants that got spilled to memory.  */
-  if (MEM_P (cplx) && GET_CODE (XEXP (cplx, 0)) == SYMBOL_REF)
+  if (MEM_P (cplx) && SYMBOL_REF_P (XEXP (cplx, 0)))
     {
       tree decl = SYMBOL_REF_DECL (XEXP (cplx, 0));
       if (decl && TREE_CODE (decl) == COMPLEX_CST)
@@ -3587,7 +3587,7 @@ emit_move_ccmode (machine_mode mode, rtx x, rtx y)
 static bool
 undefined_operand_subword_p (const_rtx op, int i)
 {
-  if (GET_CODE (op) != SUBREG)
+  if (!SUBREG_P (op))
     return false;
   machine_mode innermostmode = GET_MODE (SUBREG_REG (op));
   poly_int64 offset = i * UNITS_PER_WORD + subreg_memory_offset (op);
@@ -3656,7 +3656,7 @@ emit_move_multi_word (machine_mode mode, rtx x, rtx y)
 
       gcc_assert (xpart && ypart);
 
-      need_clobber |= (GET_CODE (xpart) == SUBREG);
+      need_clobber |= SUBREG_P (xpart);
 
       last_insn = emit_move_insn (xpart, ypart);
     }
@@ -4759,7 +4759,7 @@ optimize_bitfield_assignment_op (poly_uint64 pbitsize,
       offset1 = (offset1 - bitpos) / BITS_PER_UNIT;
       str_rtx = adjust_address (str_rtx, str_mode, offset1);
     }
-  else if (!REG_P (str_rtx) && GET_CODE (str_rtx) != SUBREG)
+  else if (!REG_P (str_rtx) && !SUBREG_P (str_rtx))
     return false;
 
   /* If the bit field covers the whole REG/MEM, store_field
@@ -5554,7 +5554,7 @@ store_expr (tree exp, rtx target, int call_param_p,
 
       return NULL_RTX;
     }
-  else if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
+  else if (SUBREG_P (target) && SUBREG_PROMOTED_VAR_P (target))
     /* If this is a scalar in a register that is stored in a wider mode
        than the declared mode, compute the result into its declared mode
        and then convert to the wider mode.  Our value is the computed
@@ -6953,7 +6953,7 @@ store_field (rtx target, poly_int64 bitsize, poly_int64 bitpos,
 	  && GET_MODE_CLASS (mode) != MODE_COMPLEX_INT
 	  && GET_MODE_CLASS (mode) != MODE_COMPLEX_FLOAT)
       || REG_P (target)
-      || GET_CODE (target) == SUBREG
+      || SUBREG_P (target)
       /* If the field isn't aligned enough to store as an ordinary memref,
 	 store it as a bit field.  */
       || (mode != BLKmode
@@ -7448,8 +7448,8 @@ force_operand (rtx value, rtx target)
   /* Check for a PIC address load.  */
   if ((code == PLUS || code == MINUS)
       && XEXP (value, 0) == pic_offset_table_rtx
-      && (GET_CODE (XEXP (value, 1)) == SYMBOL_REF
-	  || GET_CODE (XEXP (value, 1)) == LABEL_REF
+      && (SYMBOL_REF_P (XEXP (value, 1))
+	  || LABEL_REF_P (XEXP (value, 1))
 	  || GET_CODE (XEXP (value, 1)) == CONST))
     {
       if (!subtarget)
@@ -7605,7 +7605,7 @@ safe_from_p (const_rtx x, tree exp, int top_p)
 
   /* If this is a subreg of a hard register, declare it unsafe, otherwise,
      find the underlying pseudo.  */
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       x = SUBREG_REG (x);
       if (REG_P (x) && REGNO (x) < FIRST_PSEUDO_REGISTER)
@@ -7749,7 +7749,7 @@ safe_from_p (const_rtx x, tree exp, int top_p)
      with it.  */
   if (exp_rtl)
     {
-      if (GET_CODE (exp_rtl) == SUBREG)
+      if (SUBREG_P (exp_rtl))
 	{
 	  exp_rtl = SUBREG_REG (exp_rtl);
 	  if (REG_P (exp_rtl)
@@ -8518,7 +8518,7 @@ expand_expr_real_2 (sepops ops, rtx target, machine_mode tmode,
 	     a promoted SUBREG, clear that indication since we now
 	     have to do the proper extension.  */
 	  if (TYPE_UNSIGNED (TREE_TYPE (treeop0)) != unsignedp
-	      && GET_CODE (op0) == SUBREG)
+	      && SUBREG_P (op0))
 	    SUBREG_PROMOTED_VAR_P (op0) = 0;
 
 	  return REDUCE_BIT_FIELD (op0);
@@ -10678,7 +10678,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
 		    for (int i = 0; i < 2; i++)
 		      {
 			rtx op = read_complex_part (op0, i != 0);
-			if (GET_CODE (op) == SUBREG)
+			if (SUBREG_P (op))
 			  op = force_reg (GET_MODE (op), op);
 			rtx temp = gen_lowpart_common (GET_MODE_INNER (mode1),
 						       op);
@@ -10803,7 +10803,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
 	   (which we know to be the width of a basic mode), then
 	   storing into memory, and changing the mode to BLKmode.  */
 	if (mode1 == VOIDmode
-	    || REG_P (op0) || GET_CODE (op0) == SUBREG
+	    || REG_P (op0) || SUBREG_P (op0)
 	    || (mode1 != BLKmode && ! direct_load[(int) mode1]
 		&& GET_MODE_CLASS (mode) != MODE_COMPLEX_INT
 		&& GET_MODE_CLASS (mode) != MODE_COMPLEX_FLOAT
@@ -11114,7 +11114,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
 			    GET_MODE_PRECISION (GET_MODE (op0)))
 	       && !COMPLEX_MODE_P (GET_MODE (op0)))
 	{
-	  if (GET_CODE (op0) == SUBREG)
+	  if (SUBREG_P (op0))
 	    op0 = force_reg (GET_MODE (op0), op0);
 	  temp = gen_lowpart_common (mode, op0);
 	  if (temp)
@@ -12271,7 +12271,7 @@ do_tablejump (rtx index, machine_mode mode, rtx range, rtx table_label,
 	 sign-extended subreg, and RANGE does not have the sign bit set, then
 	 we have a value that is valid for both sign and zero extension.  In
 	 this case, we get better code if we sign extend.  */
-      if (GET_CODE (index) == SUBREG
+      if (SUBREG_P (index)
 	  && SUBREG_PROMOTED_VAR_P (index)
 	  && SUBREG_PROMOTED_SIGNED_P (index)
 	  && ((width = GET_MODE_PRECISION (as_a <scalar_int_mode> (mode)))
diff --git a/gcc/final.c b/gcc/final.c
index fefc4874b24..ec396230a3a 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -1611,7 +1611,7 @@ get_some_local_dynamic_name ()
       FOR_EACH_SUBRTX (iter, array, PATTERN (insn), ALL)
 	{
 	  const_rtx x = *iter;
-	  if (GET_CODE (x) == SYMBOL_REF)
+	  if (SYMBOL_REF_P (x))
 	    {
 	      if (SYMBOL_REF_TLS_MODEL (x) == TLS_MODEL_LOCAL_DYNAMIC)
 		return some_local_dynamic_name = XSTR (x, 0);
@@ -2798,17 +2798,17 @@ final_scan_insn_1 (rtx_insn *insn, FILE *file, int optimize_p ATTRIBUTE_UNUSED,
 		&& insn != last_ignored_compare)
 	      {
 		rtx src1, src2;
-		if (GET_CODE (SET_SRC (set)) == SUBREG)
+		if (SUBREG_P (SET_SRC (set)))
 		  SET_SRC (set) = alter_subreg (&SET_SRC (set), true);
 
 		src1 = SET_SRC (set);
 		src2 = NULL_RTX;
 		if (GET_CODE (SET_SRC (set)) == COMPARE)
 		  {
-		    if (GET_CODE (XEXP (SET_SRC (set), 0)) == SUBREG)
+		    if (SUBREG_P (XEXP (SET_SRC (set), 0)))
 		      XEXP (SET_SRC (set), 0)
 			= alter_subreg (&XEXP (SET_SRC (set), 0), true);
-		    if (GET_CODE (XEXP (SET_SRC (set), 1)) == SUBREG)
+		    if (SUBREG_P (XEXP (SET_SRC (set), 1)))
 		      XEXP (SET_SRC (set), 1)
 			= alter_subreg (&XEXP (SET_SRC (set), 1), true);
 		    if (XEXP (SET_SRC (set), 1)
@@ -3093,7 +3093,7 @@ final_scan_insn_1 (rtx_insn *insn, FILE *file, int optimize_p ATTRIBUTE_UNUSED,
 	  {
 	    rtx x = call_from_call_insn (call_insn);
 	    x = XEXP (x, 0);
-	    if (x && MEM_P (x) && GET_CODE (XEXP (x, 0)) == SYMBOL_REF)
+	    if (x && MEM_P (x) && SYMBOL_REF_P (XEXP (x, 0)))
 	      {
 		tree t;
 		x = XEXP (x, 0);
@@ -3318,7 +3318,7 @@ cleanup_subreg_operands (rtx_insn *insn)
 	 already if we are inside a match_operator expression that
 	 matches the else clause.  Instead we test the underlying
 	 expression directly.  */
-      if (GET_CODE (*recog_data.operand_loc[i]) == SUBREG)
+      if (SUBREG_P (*recog_data.operand_loc[i]))
 	{
 	  recog_data.operand[i] = alter_subreg (recog_data.operand_loc[i], true);
 	  changed = true;
@@ -3331,7 +3331,7 @@ cleanup_subreg_operands (rtx_insn *insn)
 
   for (i = 0; i < recog_data.n_dups; i++)
     {
-      if (GET_CODE (*recog_data.dup_loc[i]) == SUBREG)
+      if (SUBREG_P (*recog_data.dup_loc[i]))
 	{
 	  *recog_data.dup_loc[i] = alter_subreg (recog_data.dup_loc[i], true);
 	  changed = true;
@@ -4003,7 +4003,7 @@ output_asm_label (rtx x)
 {
   char buf[256];
 
-  if (GET_CODE (x) == LABEL_REF)
+  if (LABEL_REF_P (x))
     x = label_ref_label (x);
   if (LABEL_P (x)
       || (NOTE_P (x)
@@ -4024,7 +4024,7 @@ mark_symbol_refs_as_used (rtx x)
   FOR_EACH_SUBRTX (iter, array, x, ALL)
     {
       const_rtx x = *iter;
-      if (GET_CODE (x) == SYMBOL_REF)
+      if (SYMBOL_REF_P (x))
 	if (tree t = SYMBOL_REF_DECL (x))
 	  assemble_external (t);
     }
@@ -4042,7 +4042,7 @@ mark_symbol_refs_as_used (rtx x)
 void
 output_operand (rtx x, int code ATTRIBUTE_UNUSED)
 {
-  if (x && GET_CODE (x) == SUBREG)
+  if (x && SUBREG_P (x))
     x = alter_subreg (&x, true);
 
   /* X must not be a pseudo reg.  */
@@ -4183,7 +4183,7 @@ output_addr_const (FILE *file, rtx x)
       fprintf (file, "-");
       if ((CONST_INT_P (XEXP (x, 1)) && INTVAL (XEXP (x, 1)) >= 0)
 	  || GET_CODE (XEXP (x, 1)) == PC
-	  || GET_CODE (XEXP (x, 1)) == SYMBOL_REF)
+	  || SYMBOL_REF_P (XEXP (x, 1)))
 	output_addr_const (file, XEXP (x, 1));
       else
 	{
diff --git a/gcc/function.c b/gcc/function.c
index 2a0061cad35..bd9672e3e44 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -3210,7 +3210,7 @@ assign_parm_setup_reg (struct assign_parm_data_all *all, tree parm,
 	  /* If op1 is a hard register that is likely spilled, first
 	     force it into a pseudo, otherwise combiner might extend
 	     its lifetime too much.  */
-	  if (GET_CODE (t) == SUBREG)
+	  if (SUBREG_P (t))
 	    t = SUBREG_REG (t);
 	  if (REG_P (t)
 	      && HARD_REGISTER_P (t)
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index 137864cb61b..0b16dd2a9af 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -738,7 +738,7 @@ propagate_rtx (rtx x, machine_mode mode, rtx old_rtx, rtx new_rtx,
   flags = 0;
   if (REG_P (new_rtx)
       || CONSTANT_P (new_rtx)
-      || (GET_CODE (new_rtx) == SUBREG
+      || (SUBREG_P (new_rtx)
 	  && REG_P (SUBREG_REG (new_rtx))
 	  && !paradoxical_subreg_p (new_rtx)))
     flags |= PR_CAN_APPEAR;
@@ -1146,7 +1146,7 @@ free_load_extend (rtx src, rtx_insn *insn)
       rtx patt = PATTERN (DF_REF_INSN (def));
 
       if (GET_CODE (patt) == SET
-	  && GET_CODE (SET_SRC (patt)) == MEM
+	  && MEM_P (SET_SRC (patt))
 	  && rtx_equal_p (SET_DEST (patt), reg))
 	return true;
     }
@@ -1165,7 +1165,7 @@ forward_propagate_subreg (df_ref use, rtx_insn *def_insn, rtx def_set)
 
   /* Only consider subregs... */
   machine_mode use_mode = GET_MODE (use_reg);
-  if (GET_CODE (use_reg) != SUBREG
+  if (!SUBREG_P (use_reg)
       || !REG_P (SET_DEST (def_set)))
     return false;
 
@@ -1178,7 +1178,7 @@ forward_propagate_subreg (df_ref use, rtx_insn *def_insn, rtx def_set)
 	 these SUBREGs just say how to treat the register.  */
       use_insn = DF_REF_INSN (use);
       src = SET_SRC (def_set);
-      if (GET_CODE (src) == SUBREG
+      if (SUBREG_P (src)
 	  && REG_P (SUBREG_REG (src))
 	  && REGNO (SUBREG_REG (src)) >= FIRST_PSEUDO_REGISTER
 	  && GET_MODE (SUBREG_REG (src)) == use_mode
@@ -1327,17 +1327,17 @@ forward_propagate_and_simplify (df_ref use, rtx_insn *def_insn, rtx def_set)
 
   /* If def and use are subreg, check if they match.  */
   reg = DF_REF_REG (use);
-  if (GET_CODE (reg) == SUBREG && GET_CODE (SET_DEST (def_set)) == SUBREG)
+  if (SUBREG_P (reg) && SUBREG_P (SET_DEST (def_set)))
     {
       if (maybe_ne (SUBREG_BYTE (SET_DEST (def_set)), SUBREG_BYTE (reg)))
 	return false;
     }
   /* Check if the def had a subreg, but the use has the whole reg.  */
-  else if (REG_P (reg) && GET_CODE (SET_DEST (def_set)) == SUBREG)
+  else if (REG_P (reg) && SUBREG_P (SET_DEST (def_set)))
     return false;
   /* Check if the use has a subreg, but the def had the whole reg.  Unlike the
      previous case, the optimization is possible and often useful indeed.  */
-  else if (GET_CODE (reg) == SUBREG && REG_P (SET_DEST (def_set)))
+  else if (SUBREG_P (reg) && REG_P (SET_DEST (def_set)))
     reg = SUBREG_REG (reg);
 
   /* Make sure that we can treat REG as having the same mode as the
@@ -1406,7 +1406,7 @@ forward_propagate_and_simplify (df_ref use, rtx_insn *def_insn, rtx def_set)
       set_reg_equal = (note == NULL_RTX
 		       && REG_P (SET_DEST (use_set))
 		       && !REG_P (src)
-		       && !(GET_CODE (src) == SUBREG
+		       && !(SUBREG_P (src)
 			    && REG_P (SUBREG_REG (src)))
 		       && !reg_mentioned_p (SET_DEST (use_set),
 					    SET_SRC (use_set))
diff --git a/gcc/gcse-common.c b/gcc/gcse-common.c
index e6e4b642b58..0e40adb44af 100644
--- a/gcc/gcse-common.c
+++ b/gcc/gcse-common.c
@@ -40,7 +40,7 @@ canon_list_insert (rtx dest, const_rtx x ATTRIBUTE_UNUSED, void *data)
   int bb;
   modify_pair pair;
 
-  while (GET_CODE (dest) == SUBREG
+  while (SUBREG_P (dest)
       || GET_CODE (dest) == ZERO_EXTRACT
       || GET_CODE (dest) == STRICT_LOW_PART)
     dest = XEXP (dest, 0);
diff --git a/gcc/gcse.c b/gcc/gcse.c
index ff2771bdc04..357f8a2ddb2 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -984,7 +984,7 @@ mems_conflict_for_gcse_p (rtx dest, const_rtx setter ATTRIBUTE_UNUSED,
 {
   struct mem_conflict_info *mci = (struct mem_conflict_info *) data;
 
-  while (GET_CODE (dest) == SUBREG
+  while (SUBREG_P (dest)
 	 || GET_CODE (dest) == ZERO_EXTRACT
 	 || GET_CODE (dest) == STRICT_LOW_PART)
     dest = XEXP (dest, 0);
@@ -1478,7 +1478,7 @@ record_last_set_info (rtx dest, const_rtx setter ATTRIBUTE_UNUSED, void *data)
 {
   rtx_insn *last_set_insn = (rtx_insn *) data;
 
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dest = SUBREG_REG (dest);
 
   if (REG_P (dest))
@@ -1735,7 +1735,7 @@ prune_expressions (bool pre_p)
 		 of the tables.  */
 	      if (MEM_P (x))
 		{
-		  if (GET_CODE (XEXP (x, 0)) == SYMBOL_REF
+		  if (SYMBOL_REF_P (XEXP (x, 0))
 		      && CONSTANT_POOL_ADDRESS_P (XEXP (x, 0)))
 		    continue;
 
@@ -3381,7 +3381,7 @@ get_pressure_class_and_nregs (rtx_insn *insn, int *nregs)
   const_rtx set = single_set_gcse (insn);
 
   reg = SET_DEST (set);
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   if (MEM_P (reg))
     {
diff --git a/gcc/genattrtab.c b/gcc/genattrtab.c
index cdf0b5c12dc..912d6b4f594 100644
--- a/gcc/genattrtab.c
+++ b/gcc/genattrtab.c
@@ -739,7 +739,7 @@ check_attr_test (file_location loc, rtx exp, attr_desc *attr)
 	  else
 	    {
 	      for (av = attr2->first_value; av; av = av->next)
-		if (GET_CODE (av->value) == CONST_STRING
+		if (CONST_STRING_P (av->value)
 		    && ! strcmp (XSTR (exp, 1), XSTR (av->value, 0)))
 		  break;
 
@@ -813,8 +813,8 @@ check_attr_test (file_location loc, rtx exp, attr_desc *attr)
     case LE:  case LT:  case GT:  case GE:
     case LEU: case LTU: case GTU: case GEU:
     case NE:  case EQ:
-      if (GET_CODE (XEXP (exp, 0)) == SYMBOL_REF
-	  && GET_CODE (XEXP (exp, 1)) == SYMBOL_REF)
+      if (SYMBOL_REF_P (XEXP (exp, 0))
+	  && SYMBOL_REF_P (XEXP (exp, 1)))
 	exp = attr_rtx (GET_CODE (exp),
 			attr_rtx (SYMBOL_REF, XSTR (XEXP (exp, 0), 0)),
 			attr_rtx (SYMBOL_REF, XSTR (XEXP (exp, 1), 0)));
@@ -892,7 +892,7 @@ check_attr_value (file_location loc, rtx exp, class attr_desc *attr)
 	}
 
       for (av = attr->first_value; av; av = av->next)
-	if (GET_CODE (av->value) == CONST_STRING
+	if (CONST_STRING_P (av->value)
 	    && ! strcmp (XSTR (av->value, 0), XSTR (exp, 0)))
 	  break;
 
@@ -3715,7 +3715,7 @@ write_test_expr (FILE *outf, rtx exp, unsigned int attrs_cached, int flags,
     /* The address of the branch target.  */
     case MATCH_DUP:
       fprintf (outf,
-	       "INSN_ADDRESSES_SET_P () ? INSN_ADDRESSES (INSN_UID (GET_CODE (operands[%d]) == LABEL_REF ? XEXP (operands[%d], 0) : operands[%d])) : 0",
+	       "INSN_ADDRESSES_SET_P () ? INSN_ADDRESSES (INSN_UID (LABEL_REF_P (operands[%d]) ? XEXP (operands[%d], 0) : operands[%d])) : 0",
 	       XINT (exp, 0), XINT (exp, 0), XINT (exp, 0));
       break;
 
@@ -5000,7 +5000,7 @@ make_automaton_attrs (void)
 	{
 	  if (val == tune_attr->default_val)
 	    continue;
-	  gcc_assert (GET_CODE (val->value) == CONST_STRING);
+	  gcc_assert (CONST_STRING_P (val->value));
 	  fprintf (dfa_file,
 		   "extern int internal_dfa_insn_code_%s (rtx_insn *);\n",
 		   XSTR (val->value, 0));
@@ -5012,7 +5012,7 @@ make_automaton_attrs (void)
 	{
 	  if (val == tune_attr->default_val)
 	    continue;
-	  gcc_assert (GET_CODE (val->value) == CONST_STRING);
+	  gcc_assert (CONST_STRING_P (val->value));
 	  fprintf (latency_file,
 		   "extern int insn_default_latency_%s (rtx_insn *);\n",
 		   XSTR (val->value, 0));
@@ -5024,7 +5024,7 @@ make_automaton_attrs (void)
 	{
 	  if (val == tune_attr->default_val)
 	    continue;
-	  gcc_assert (GET_CODE (val->value) == CONST_STRING);
+	  gcc_assert (CONST_STRING_P (val->value));
 	  fprintf (attr_file,
 		   "extern int internal_dfa_insn_code_%s (rtx_insn *);\n"
 		   "extern int insn_default_latency_%s (rtx_insn *);\n",
diff --git a/gcc/genpreds.c b/gcc/genpreds.c
index 556c4bdd869..1d9baea9f44 100644
--- a/gcc/genpreds.c
+++ b/gcc/genpreds.c
@@ -103,7 +103,7 @@ process_define_predicate (md_rtx_info *info)
        (define_predicate "basereg_operand"
          (match_operand 0 "register_operand")
        {
-         if (GET_CODE (op) == SUBREG)
+         if (SUBREG_P (op))
            op = SUBREG_REG (op);
          return REG_POINTER (op);
        })
@@ -112,7 +112,7 @@ process_define_predicate (md_rtx_info *info)
 
        static inline int basereg_operand_1(rtx op, machine_mode mode)
        {
-         if (GET_CODE (op) == SUBREG)
+         if (SUBREG_P (op))
            op = SUBREG_REG (op);
          return REG_POINTER (op);
        }
@@ -1247,14 +1247,14 @@ write_tm_constrs_h (void)
 	  error ("you can't use lval or hval");
 #else
 	if (needs_hval)
-	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)"
+	  puts ("  if (CONST_DOUBLE_P (op) && mode == VOIDmode)"
 		"    hval = CONST_DOUBLE_HIGH (op);");
 	if (needs_lval)
-	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)"
+	  puts ("  if (CONST_DOUBLE_P (op) && mode == VOIDmode)"
 		"    lval = CONST_DOUBLE_LOW (op);");
 #endif
 	if (needs_rval)
-	  puts ("  if (GET_CODE (op) == CONST_DOUBLE && mode != VOIDmode)"
+	  puts ("  if (CONST_DOUBLE_P (op) && mode != VOIDmode)"
 		"    rval = CONST_DOUBLE_REAL_VALUE (op);");
 
 	write_predicate_stmts (c->exp);
diff --git a/gcc/genrecog.c b/gcc/genrecog.c
index f20089eeee8..82b00fde9ed 100644
--- a/gcc/genrecog.c
+++ b/gcc/genrecog.c
@@ -1687,7 +1687,7 @@ simplify_tests (state *s)
   for (decision *d = s->first; d; d = d->next)
     {
       uint64_t label;
-      /* Convert checks for GET_CODE (x) == CONST_INT and XWINT (x, 0) == N
+      /* Convert checks for CONST_INT_P (x) and XWINT (x, 0) == N
 	 into checks for const_int_rtx[N'], if N is suitably small.  */
       if (d->test.kind == rtx_test::CODE
 	  && d->if_statement_p (&label)
diff --git a/gcc/gensupport.c b/gcc/gensupport.c
index 1aab7119901..8bdd0ae7502 100644
--- a/gcc/gensupport.c
+++ b/gcc/gensupport.c
@@ -683,7 +683,7 @@ is_predicable (class queue_elem *elem)
 	      || strcmp (XSTR (SET_DEST (sub), 0), "predicable") != 0)
 	    break;
 	  sub = SET_SRC (sub);
-	  if (GET_CODE (sub) == CONST_STRING)
+	  if (CONST_STRING_P (sub))
 	    {
 	      value = XSTR (sub, 0);
 	      goto found;
@@ -775,7 +775,7 @@ has_subst_attribute (class queue_elem *elem, class queue_elem *subst_elem)
 	      || strcmp (XSTR (SET_DEST (cur_attr), 0), subst_name) != 0)
 	    break;
 	  cur_attr = SET_SRC (cur_attr);
-	  if (GET_CODE (cur_attr) == CONST_STRING)
+	  if (CONST_STRING_P (cur_attr))
 	    {
 	      value = XSTR (cur_attr, 0);
 	      goto found;
@@ -1415,7 +1415,7 @@ alter_attrs_for_insn (rtx insn)
 	  if (strcmp (XSTR (SET_DEST (sub), 0), "predicable") == 0)
 	    {
 	      sub = SET_SRC (sub);
-	      if (GET_CODE (sub) == CONST_STRING)
+	      if (CONST_STRING_P (sub))
 		{
 		  predicable_idx = i;
 		  XSTR (sub, 0) = "ce_enabled";
@@ -3156,7 +3156,7 @@ needs_barrier_p (rtx x)
 {
   return (GET_CODE (x) == SET
 	  && GET_CODE (SET_DEST (x)) == PC
-	  && GET_CODE (SET_SRC (x)) == LABEL_REF);
+	  && LABEL_REF_P (SET_SRC (x)));
 }
 
 #define NS "NULL"
diff --git a/gcc/ifcvt.c b/gcc/ifcvt.c
index e0c9522057a..a62b1265599 100644
--- a/gcc/ifcvt.c
+++ b/gcc/ifcvt.c
@@ -310,7 +310,7 @@ rtx_interchangeable_p (const_rtx a, const_rtx b)
   if (!rtx_equal_p (a, b))
     return false;
 
-  if (GET_CODE (a) != MEM)
+  if (!MEM_P (a))
     return true;
 
   /* A dead type-unsafe memory reference is legal, but a live type-unsafe memory
@@ -438,7 +438,7 @@ cond_exec_get_condition (rtx_insn *jump)
 
   /* If this branches to JUMP_LABEL when the condition is false,
      reverse the condition.  */
-  if (GET_CODE (XEXP (test_if, 2)) == LABEL_REF
+  if (LABEL_REF_P (XEXP (test_if, 2))
       && label_ref_label (XEXP (test_if, 2)) == JUMP_LABEL (jump))
     {
       enum rtx_code rev = reversed_comparison_code (cond, jump);
@@ -832,7 +832,7 @@ noce_emit_store_flag (struct noce_if_info *if_info, rtx x, int reversep,
     {
       rtx set = pc_set (if_info->jump);
       cond = XEXP (SET_SRC (set), 0);
-      if (GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF
+      if (LABEL_REF_P (XEXP (SET_SRC (set), 2))
 	  && label_ref_label (XEXP (SET_SRC (set), 2)) == JUMP_LABEL (if_info->jump))
 	reversep = !reversep;
       if (if_info->then_else_reversed)
@@ -905,7 +905,7 @@ noce_emit_move_insn (rtx x, rtx y)
       start_sequence ();
       /* Check that the SET_SRC is reasonable before calling emit_move_insn,
 	 otherwise construct a suitable SET pattern ourselves.  */
-      insn = (OBJECT_P (y) || CONSTANT_P (y) || GET_CODE (y) == SUBREG)
+      insn = (OBJECT_P (y) || CONSTANT_P (y) || SUBREG_P (y))
 	     ? emit_move_insn (x, y)
 	     : emit_insn (gen_rtx_SET (x, y));
       seq = get_insns ();
@@ -1720,7 +1720,7 @@ noce_emit_cmove (struct noce_if_info *if_info, rtx x, enum rtx_code code,
   if (reload_completed)
     return NULL_RTX;
 
-  if (GET_CODE (vtrue) == SUBREG && GET_CODE (vfalse) == SUBREG)
+  if (SUBREG_P (vtrue) && SUBREG_P (vfalse))
     {
       rtx reg_vtrue = SUBREG_REG (vtrue);
       rtx reg_vfalse = SUBREG_REG (vfalse);
@@ -2316,7 +2316,7 @@ noce_get_alt_condition (struct noce_if_info *if_info, rtx target,
   set = pc_set (if_info->jump);
   cond = XEXP (SET_SRC (set), 0);
   reverse
-    = GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF
+    = LABEL_REF_P (XEXP (SET_SRC (set), 2))
       && label_ref_label (XEXP (SET_SRC (set), 2)) == JUMP_LABEL (if_info->jump);
   if (if_info->then_else_reversed)
     reverse = !reverse;
@@ -2630,7 +2630,7 @@ noce_try_abs (struct noce_if_info *if_info)
 	return FALSE;
     }
   if (MEM_P (c)
-      && GET_CODE (XEXP (c, 0)) == SYMBOL_REF
+      && SYMBOL_REF_P (XEXP (c, 0))
       && CONSTANT_POOL_ADDRESS_P (XEXP (c, 0)))
     c = get_pool_constant (XEXP (c, 0));
 
@@ -2928,7 +2928,7 @@ noce_get_condition (rtx_insn *jump, rtx_insn **earliest, bool then_else_reversed
 
   /* If this branches to JUMP_LABEL when the condition is false,
      reverse the condition.  */
-  reverse = (GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF
+  reverse = (LABEL_REF_P (XEXP (SET_SRC (set), 2))
 	     && label_ref_label (XEXP (SET_SRC (set), 2)) == JUMP_LABEL (jump));
 
   /* We may have to reverse because the caller's if block is not canonical,
@@ -3335,7 +3335,7 @@ bb_ok_for_noce_convert_multiple_sets (basic_block test_bb)
 	return false;
 
       if (!(REG_P (src)
-	   || (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src))
+	   || (SUBREG_P (src) && REG_P (SUBREG_REG (src))
 	       && subreg_lowpart_p (src))))
 	return false;
 
@@ -3710,7 +3710,7 @@ check_cond_move_block (basic_block bb,
 	 modified earlier in the block.  */
       if ((REG_P (src)
 	   && vals->get (src))
-	  || (GET_CODE (src) == SUBREG && REG_P (SUBREG_REG (src))
+	  || (SUBREG_P (src) && REG_P (SUBREG_REG (src))
 	      && vals->get (SUBREG_REG (src))))
 	return FALSE;
 
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 10673769958..58fcc5de017 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -658,7 +658,7 @@ expand_arith_overflow_result_store (tree lhs, rtx target,
 static void
 expand_ubsan_result_store (rtx target, rtx res)
 {
-  if (GET_CODE (target) == SUBREG && SUBREG_PROMOTED_VAR_P (target))
+  if (SUBREG_P (target) && SUBREG_PROMOTED_VAR_P (target))
     /* If this is a scalar in a register that is stored in a wider mode   
        than the declared mode, compute the result into its declared mode
        and then convert to the wider mode.  Our value is the computed
@@ -2898,7 +2898,7 @@ expand_direct_optab_fn (internal_fn fn, gcall *stmt, direct_optab optab,
      guarantee that the instruction will leave the upper bits of the
      register in the state required by SUBREG_PROMOTED_SIGN.  */
   rtx dest = lhs_rtx;
-  if (dest && GET_CODE (dest) == SUBREG && SUBREG_PROMOTED_VAR_P (dest))
+  if (dest && SUBREG_P (dest) && SUBREG_PROMOTED_VAR_P (dest))
     dest = NULL_RTX;
 
   create_output_operand (&ops[0], dest, insn_data[icode].operand[0].mode);
@@ -2926,7 +2926,7 @@ expand_direct_optab_fn (internal_fn fn, gcall *stmt, direct_optab optab,
 
 	 If the return value has a nonintegral type, its mode must match
 	 the instruction result.  */
-      if (GET_CODE (lhs_rtx) == SUBREG && SUBREG_PROMOTED_VAR_P (lhs_rtx))
+      if (SUBREG_P (lhs_rtx) && SUBREG_PROMOTED_VAR_P (lhs_rtx))
 	{
 	  /* If this is a scalar in a register that is stored in a wider
 	     mode than the declared mode, compute the result into its
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index c7457fa4431..200ed52ad55 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -1850,7 +1850,7 @@ create_insn_allocnos (rtx x, rtx outer, bool output_p)
 	  if ((a = ira_curr_regno_allocno_map[regno]) == NULL)
 	    {
 	      a = ira_create_allocno (regno, false, ira_curr_loop_tree_node);
-	      if (outer != NULL && GET_CODE (outer) == SUBREG)
+	      if (outer != NULL && SUBREG_P (outer))
 		{
 		  machine_mode wmode = GET_MODE (outer);
 		  if (partial_subreg_p (ALLOCNO_WMODE (a), wmode))
diff --git a/gcc/ira-conflicts.c b/gcc/ira-conflicts.c
index 813a6d4103c..fc471bb8770 100644
--- a/gcc/ira-conflicts.c
+++ b/gcc/ira-conflicts.c
@@ -207,7 +207,7 @@ allocnos_conflict_for_copy_p (ira_allocno_t a1, ira_allocno_t a2)
 
 /* Check that X is REG or SUBREG of REG.  */
 #define REG_SUBREG_P(x)							\
-   (REG_P (x) || (GET_CODE (x) == SUBREG && REG_P (SUBREG_REG (x))))
+   (REG_P (x) || (SUBREG_P (x) && REG_P (SUBREG_REG (x))))
 
 /* Return X if X is a REG, otherwise it should be SUBREG of REG and
    the function returns the reg in this case.  *OFFSET will be set to
@@ -220,7 +220,7 @@ go_through_subreg (rtx x, int *offset)
   *offset = 0;
   if (REG_P (x))
     return x;
-  ira_assert (GET_CODE (x) == SUBREG);
+  ira_assert (SUBREG_P (x));
   reg = SUBREG_REG (x);
   ira_assert (REG_P (reg));
   if (REGNO (reg) < FIRST_PSEUDO_REGISTER)
diff --git a/gcc/ira-costs.c b/gcc/ira-costs.c
index c7feaba3718..1838a0ea538 100644
--- a/gcc/ira-costs.c
+++ b/gcc/ira-costs.c
@@ -1297,11 +1297,11 @@ record_operand_costs (rtx_insn *insn, enum reg_class *pref)
       rtx dest = SET_DEST (set);
       rtx src = SET_SRC (set);
 
-      if (GET_CODE (dest) == SUBREG
+      if (SUBREG_P (dest)
 	  && known_eq (GET_MODE_SIZE (GET_MODE (dest)),
 		       GET_MODE_SIZE (GET_MODE (SUBREG_REG (dest)))))
 	dest = SUBREG_REG (dest);
-      if (GET_CODE (src) == SUBREG
+      if (SUBREG_P (src)
 	  && known_eq (GET_MODE_SIZE (GET_MODE (src)),
 		       GET_MODE_SIZE (GET_MODE (SUBREG_REG (src)))))
 	src = SUBREG_REG (src);
@@ -1404,7 +1404,7 @@ record_operand_costs (rtx_insn *insn, enum reg_class *pref)
     {
       memcpy (op_costs[i], init_cost, struct_costs_size);
 
-      if (GET_CODE (recog_data.operand[i]) == SUBREG)
+      if (SUBREG_P (recog_data.operand[i]))
 	recog_data.operand[i] = SUBREG_REG (recog_data.operand[i]);
 
       if (MEM_P (recog_data.operand[i]))
@@ -1475,7 +1475,7 @@ scan_one_insn (rtx_insn *insn)
   if (pat_code == USE || pat_code == CLOBBER)
     {
       rtx x = XEXP (PATTERN (insn), 0);
-      if (GET_CODE (x) == REG
+      if (REG_P (x)
 	  && REGNO (x) >= FIRST_PSEUDO_REGISTER
 	  && have_regs_of_mode[GET_MODE (x)])
         ira_init_register_move_cost_if_necessary (GET_MODE (x));
@@ -1542,7 +1542,7 @@ scan_one_insn (rtx_insn *insn)
     {
       rtx op = recog_data.operand[i];
       
-      if (GET_CODE (op) == SUBREG)
+      if (SUBREG_P (op))
 	op = SUBREG_REG (op);
       if (REG_P (op) && REGNO (op) >= FIRST_PSEUDO_REGISTER)
 	{
diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
index 255af307b3c..846dd5d501b 100644
--- a/gcc/ira-emit.c
+++ b/gcc/ira-emit.c
@@ -941,7 +941,7 @@ emit_move_list (move_t list, int freq)
 	  if ((set = single_set (insn)) != NULL_RTX)
 	    {
 	      dest = SET_DEST (set);
-	      if (GET_CODE (dest) == SUBREG)
+	      if (SUBREG_P (dest))
 		dest = SUBREG_REG (dest);
 	      ira_assert (REG_P (dest));
 	      regno = REGNO (dest);
diff --git a/gcc/ira-lives.c b/gcc/ira-lives.c
index 2029027125a..f1fae3c1ec7 100644
--- a/gcc/ira-lives.c
+++ b/gcc/ira-lives.c
@@ -416,7 +416,7 @@ mark_ref_live (df_ref ref)
   rtx reg = DF_REF_REG (ref);
   rtx orig_reg = reg;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
@@ -545,11 +545,11 @@ mark_ref_dead (df_ref def)
   if (DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL))
     return;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL)
-      && (GET_CODE (orig_reg) != SUBREG
+      && (!SUBREG_P (orig_reg)
 	  || REGNO (reg) < FIRST_PSEUDO_REGISTER
 	  || !read_modify_subreg_p (orig_reg)))
     return;
@@ -573,7 +573,7 @@ make_pseudo_conflict (rtx reg, enum reg_class cl, rtx dreg, rtx orig_dreg,
   rtx orig_reg = reg;
   ira_allocno_t a;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (! REG_P (reg) || REGNO (reg) < FIRST_PSEUDO_REGISTER)
@@ -646,7 +646,7 @@ check_and_make_def_conflict (int alt, int def, enum reg_class def_cl)
   if (def_cl == NO_REGS)
     return;
 
-  if (GET_CODE (dreg) == SUBREG)
+  if (SUBREG_P (dreg))
     dreg = SUBREG_REG (dreg);
 
   if (! REG_P (dreg) || REGNO (dreg) < FIRST_PSEUDO_REGISTER)
@@ -766,7 +766,7 @@ mark_hard_reg_early_clobbers (rtx_insn *insn, bool live_p)
       {
 	rtx dreg = DF_REF_REG (def);
 
-	if (GET_CODE (dreg) == SUBREG)
+	if (SUBREG_P (dreg))
 	  dreg = SUBREG_REG (dreg);
 	if (! REG_P (dreg) || REGNO (dreg) >= FIRST_PSEUDO_REGISTER)
 	  continue;
@@ -875,7 +875,7 @@ ira_implicitly_set_insn_hard_regs (HARD_REG_SET *set,
     {
       op = recog_data.operand[i];
 
-      if (GET_CODE (op) == SUBREG)
+      if (SUBREG_P (op))
 	op = SUBREG_REG (op);
 
       if (GET_CODE (op) == SCRATCH
@@ -934,7 +934,7 @@ process_single_reg_class_operands (bool in_p, int freq)
 
       operand_a = NULL;
 
-      if (GET_CODE (operand) == SUBREG)
+      if (SUBREG_P (operand))
 	operand = SUBREG_REG (operand);
 
       if (REG_P (operand)
@@ -1337,7 +1337,7 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
 		{
 		  rtx ureg = DF_REF_REG (use);
 
-		  if (GET_CODE (ureg) == SUBREG)
+		  if (SUBREG_P (ureg))
 		    ureg = SUBREG_REG (ureg);
 		  if (! REG_P (ureg) || REGNO (ureg) >= FIRST_PSEUDO_REGISTER)
 		    continue;
diff --git a/gcc/ira.c b/gcc/ira.c
index c58daba6e79..9f7a72e3fe5 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -2206,7 +2206,7 @@ ira_bad_reload_regno_1 (int regno, rtx x)
   enum reg_class pref;
 
   /* We only deal with pseudo regs.  */
-  if (! x || GET_CODE (x) != REG)
+  if (! x || !REG_P (x))
     return false;
 
   x_regno = REGNO (x);
@@ -3323,7 +3323,7 @@ set_paradoxical_subreg (rtx_insn *insn)
   FOR_EACH_SUBRTX (iter, array, PATTERN (insn), NONCONST)
     {
       const_rtx subreg = *iter;
-      if (GET_CODE (subreg) == SUBREG)
+      if (SUBREG_P (subreg))
 	{
 	  const_rtx reg = SUBREG_REG (subreg);
 	  if (REG_P (reg) && paradoxical_subreg_p (subreg))
@@ -3933,12 +3933,12 @@ indirect_jump_optimize (void)
 	      rtx_insn *def_insn = DF_REF_INSN (def);
 	      rtx lab = NULL_RTX;
 	      rtx set = single_set (def_insn);
-	      if (set && GET_CODE (SET_SRC (set)) == LABEL_REF)
+	      if (set && LABEL_REF_P (SET_SRC (set)))
 		lab = SET_SRC (set);
 	      else
 		{
 		  rtx eqnote = find_reg_note (def_insn, REG_EQUAL, NULL_RTX);
-		  if (eqnote && GET_CODE (XEXP (eqnote, 0)) == LABEL_REF)
+		  if (eqnote && LABEL_REF_P (XEXP (eqnote, 0)))
 		    lab = XEXP (eqnote, 0);
 		}
 	      if (lab && validate_replace_rtx (SET_SRC (x), lab, insn))
@@ -4225,7 +4225,7 @@ build_insn_chain (void)
 			   conservative and treat the definition as a partial
 			   definition of the full register rather than a full
 			   definition of a specific part of the register.  */
-			if (GET_CODE (reg) == SUBREG
+			if (SUBREG_P (reg)
 			    && !DF_REF_FLAGS_IS_SET (def, DF_REF_ZERO_EXTRACT)
 			    && get_subreg_tracking_sizes (reg, &outer_size,
 							  &inner_size, &start))
@@ -4321,7 +4321,7 @@ build_insn_chain (void)
 			|| pseudo_for_reload_consideration_p (regno))
 		      {
 			HOST_WIDE_INT outer_size, inner_size, start;
-			if (GET_CODE (reg) == SUBREG
+			if (SUBREG_P (reg)
 			    && !DF_REF_FLAGS_IS_SET (use,
 						     DF_REF_SIGN_EXTRACT
 						     | DF_REF_ZERO_EXTRACT)
diff --git a/gcc/jump.c b/gcc/jump.c
index ce5cee523c3..2ec071a6235 100644
--- a/gcc/jump.c
+++ b/gcc/jump.c
@@ -263,7 +263,7 @@ maybe_propagate_label_ref (rtx_insn *jump_insn, rtx_insn *prev_nonjump_insn)
       if (label_set != NULL
 	  /* The source must be the direct LABEL_REF, not a
 	     PLUS, UNSPEC, IF_THEN_ELSE etc.  */
-	  && GET_CODE (SET_SRC (label_set)) == LABEL_REF
+	  && LABEL_REF_P (SET_SRC (label_set))
 	  && (rtx_equal_p (label_dest, pc_src)
 	      || (GET_CODE (pc_src) == IF_THEN_ELSE
 		  && (rtx_equal_p (label_dest, XEXP (pc_src, 1))
@@ -773,7 +773,7 @@ simplejump_p (const rtx_insn *insn)
   return (JUMP_P (insn)
 	  && GET_CODE (PATTERN (insn)) == SET
 	  && GET_CODE (SET_DEST (PATTERN (insn))) == PC
-	  && GET_CODE (SET_SRC (PATTERN (insn))) == LABEL_REF);
+	  && LABEL_REF_P (SET_SRC (PATTERN (insn))));
 }
 
 /* Return nonzero if INSN is a (possibly) conditional jump
@@ -792,15 +792,15 @@ condjump_p (const rtx_insn *insn)
     return 0;
 
   x = SET_SRC (x);
-  if (GET_CODE (x) == LABEL_REF)
+  if (LABEL_REF_P (x))
     return 1;
   else
     return (GET_CODE (x) == IF_THEN_ELSE
 	    && ((GET_CODE (XEXP (x, 2)) == PC
-		 && (GET_CODE (XEXP (x, 1)) == LABEL_REF
+		 && (LABEL_REF_P (XEXP (x, 1))
 		     || ANY_RETURN_P (XEXP (x, 1))))
 		|| (GET_CODE (XEXP (x, 1)) == PC
-		    && (GET_CODE (XEXP (x, 2)) == LABEL_REF
+		    && (LABEL_REF_P (XEXP (x, 2))
 			|| ANY_RETURN_P (XEXP (x, 2))))));
 }
 
@@ -824,16 +824,16 @@ condjump_in_parallel_p (const rtx_insn *insn)
     return 0;
   if (GET_CODE (SET_DEST (x)) != PC)
     return 0;
-  if (GET_CODE (SET_SRC (x)) == LABEL_REF)
+  if (LABEL_REF_P (SET_SRC (x)))
     return 1;
   if (GET_CODE (SET_SRC (x)) != IF_THEN_ELSE)
     return 0;
   if (XEXP (SET_SRC (x), 2) == pc_rtx
-      && (GET_CODE (XEXP (SET_SRC (x), 1)) == LABEL_REF
+      && (LABEL_REF_P (XEXP (SET_SRC (x), 1))
 	  || ANY_RETURN_P (XEXP (SET_SRC (x), 1))))
     return 1;
   if (XEXP (SET_SRC (x), 1) == pc_rtx
-      && (GET_CODE (XEXP (SET_SRC (x), 2)) == LABEL_REF
+      && (LABEL_REF_P (XEXP (SET_SRC (x), 2))
 	  || ANY_RETURN_P (XEXP (SET_SRC (x), 2))))
     return 1;
   return 0;
@@ -868,7 +868,7 @@ any_uncondjump_p (const rtx_insn *insn)
   const_rtx x = pc_set (insn);
   if (!x)
     return 0;
-  if (GET_CODE (SET_SRC (x)) != LABEL_REF)
+  if (!LABEL_REF_P (SET_SRC (x)))
     return 0;
   if (find_reg_note (insn, REG_NON_LOCAL_GOTO, NULL_RTX))
     return 0;
@@ -911,13 +911,13 @@ condjump_label (const rtx_insn *insn)
   if (!x)
     return NULL_RTX;
   x = SET_SRC (x);
-  if (GET_CODE (x) == LABEL_REF)
+  if (LABEL_REF_P (x))
     return x;
   if (GET_CODE (x) != IF_THEN_ELSE)
     return NULL_RTX;
-  if (XEXP (x, 2) == pc_rtx && GET_CODE (XEXP (x, 1)) == LABEL_REF)
+  if (XEXP (x, 2) == pc_rtx && LABEL_REF_P (XEXP (x, 1)))
     return XEXP (x, 1);
-  if (XEXP (x, 1) == pc_rtx && GET_CODE (XEXP (x, 2)) == LABEL_REF)
+  if (XEXP (x, 1) == pc_rtx && LABEL_REF_P (XEXP (x, 2)))
     return XEXP (x, 2);
   return NULL_RTX;
 }
@@ -1438,7 +1438,7 @@ redirect_exp_1 (rtx *loc, rtx olabel, rtx nlabel, rtx_insn *insn)
       || x == olabel)
     {
       x = redirect_target (nlabel);
-      if (GET_CODE (x) == LABEL_REF && loc == &PATTERN (insn))
+      if (LABEL_REF_P (x) && loc == &PATTERN (insn))
  	x = gen_rtx_SET (pc_rtx, x);
       validate_change (insn, loc, x, 1);
       return;
@@ -1446,7 +1446,7 @@ redirect_exp_1 (rtx *loc, rtx olabel, rtx nlabel, rtx_insn *insn)
 
   if (code == SET && SET_DEST (x) == pc_rtx
       && ANY_RETURN_P (nlabel)
-      && GET_CODE (SET_SRC (x)) == LABEL_REF
+      && LABEL_REF_P (SET_SRC (x))
       && label_ref_label (SET_SRC (x)) == olabel)
     {
       validate_change (insn, loc, nlabel, 1);
@@ -1690,7 +1690,7 @@ rtx_renumbered_equal_p (const_rtx x, const_rtx y)
     return 1;
 
   if ((code == REG || (code == SUBREG && REG_P (SUBREG_REG (x))))
-      && (REG_P (y) || (GET_CODE (y) == SUBREG
+      && (REG_P (y) || (SUBREG_P (y)
 				  && REG_P (SUBREG_REG (y)))))
     {
       int reg_x = -1, reg_y = -1;
@@ -1728,7 +1728,7 @@ rtx_renumbered_equal_p (const_rtx x, const_rtx y)
 	    reg_x = reg_renumber[reg_x];
 	}
 
-      if (GET_CODE (y) == SUBREG)
+      if (SUBREG_P (y))
 	{
 	  reg_y = REGNO (SUBREG_REG (y));
 	  byte_y = SUBREG_BYTE (y);
@@ -1900,7 +1900,7 @@ true_regnum (const_rtx x)
 	return reg_renumber[REGNO (x)];
       return REGNO (x);
     }
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       int base = true_regnum (SUBREG_REG (x));
       if (base >= 0
@@ -1924,7 +1924,7 @@ true_regnum (const_rtx x)
 unsigned int
 reg_or_subregno (const_rtx reg)
 {
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   gcc_assert (REG_P (reg));
   return REGNO (reg);
diff --git a/gcc/loop-doloop.c b/gcc/loop-doloop.c
index 0efe7b449ff..e30d17be3ef 100644
--- a/gcc/loop-doloop.c
+++ b/gcc/loop-doloop.c
@@ -190,7 +190,7 @@ doloop_condition_get (rtx_insn *doloop_pat)
   if (GET_CODE (cmp) != SET
       || SET_DEST (cmp) != pc_rtx
       || GET_CODE (SET_SRC (cmp)) != IF_THEN_ELSE
-      || GET_CODE (XEXP (SET_SRC (cmp), 1)) != LABEL_REF
+      || !LABEL_REF_P (XEXP (SET_SRC (cmp), 1))
       || XEXP (SET_SRC (cmp), 2) != pc_rtx)
     return 0;
 
diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
index 644ecfc6fbb..a69eec6cda5 100644
--- a/gcc/loop-invariant.c
+++ b/gcc/loop-invariant.c
@@ -1269,7 +1269,7 @@ get_pressure_class_and_nregs (rtx_insn *insn, int *nregs)
   /* Considered invariant insns have only one set.  */
   gcc_assert (set != NULL_RTX);
   reg = SET_DEST (set);
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   if (MEM_P (reg))
     {
@@ -1793,7 +1793,7 @@ move_invariant_reg (class loop *loop, unsigned invno)
 	 need to create a temporary register.  */
       set = single_set (inv->insn);
       reg = dest = SET_DEST (set);
-      if (GET_CODE (reg) == SUBREG)
+      if (SUBREG_P (reg))
 	reg = SUBREG_REG (reg);
       if (REG_P (reg))
 	regno = REGNO (reg);
@@ -2060,7 +2060,7 @@ static void
 mark_reg_store (rtx reg, const_rtx setter ATTRIBUTE_UNUSED,
 		void *data ATTRIBUTE_UNUSED)
 {
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (! REG_P (reg))
diff --git a/gcc/loop-iv.c b/gcc/loop-iv.c
index 2274cc3075b..5c39106170b 100644
--- a/gcc/loop-iv.c
+++ b/gcc/loop-iv.c
@@ -219,7 +219,7 @@ simple_reg_p (rtx reg)
 {
   unsigned r;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     {
       if (!subreg_lowpart_p (reg))
 	return false;
@@ -338,7 +338,7 @@ iv_get_reaching_def (rtx_insn *insn, rtx reg, df_ref *def)
   *def = NULL;
   if (!simple_reg_p (reg))
     return GRD_INVALID;
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   gcc_assert (REG_P (reg));
 
@@ -680,7 +680,7 @@ get_biv_step_1 (df_ref def, scalar_int_mode outer_mode, rtx reg,
 
 	     (set x':DI (plus:DI y:DI 1))
 	     (set x:SI (subreg:SI (x':DI)).  */
-	  if (GET_CODE (op0) != SUBREG)
+	  if (!SUBREG_P (op0))
 	    return false;
 	  if (GET_MODE (SUBREG_REG (op0)) != outer_mode)
 	    return false;
@@ -705,7 +705,7 @@ get_biv_step_1 (df_ref def, scalar_int_mode outer_mode, rtx reg,
       return false;
     }
 
-  if (GET_CODE (next) == SUBREG)
+  if (SUBREG_P (next))
     {
       if (!subreg_lowpart_p (next))
 	return false;
@@ -737,7 +737,7 @@ get_biv_step_1 (df_ref def, scalar_int_mode outer_mode, rtx reg,
 			    outer_step))
     return false;
 
-  if (GET_CODE (next) == SUBREG)
+  if (SUBREG_P (next))
     {
       scalar_int_mode amode;
       if (!is_a <scalar_int_mode> (GET_MODE (next), &amode)
@@ -1119,7 +1119,7 @@ iv_analyze_op (rtx_insn *insn, scalar_int_mode mode, rtx op, class rtx_iv *iv)
 
   if (function_invariant_p (op))
     res = GRD_INVARIANT;
-  else if (GET_CODE (op) == SUBREG)
+  else if (SUBREG_P (op))
     {
       scalar_int_mode inner_mode;
       if (!subreg_lowpart_p (op)
@@ -1175,7 +1175,7 @@ iv_analyze (rtx_insn *insn, scalar_int_mode mode, rtx val, class rtx_iv *iv)
      following insns.  */
   if (simple_reg_p (val))
     {
-      if (GET_CODE (val) == SUBREG)
+      if (SUBREG_P (val))
 	reg = SUBREG_REG (val);
       else
 	reg = val;
@@ -1321,7 +1321,7 @@ altered_reg_used (const_rtx x, bitmap alt)
 static void
 mark_altered (rtx expr, const_rtx by ATTRIBUTE_UNUSED, void *alt)
 {
-  if (GET_CODE (expr) == SUBREG)
+  if (SUBREG_P (expr))
     expr = SUBREG_REG (expr);
   if (!REG_P (expr))
     return;
@@ -1502,7 +1502,7 @@ implies_p (rtx a, rtx b)
       op1 = XEXP (a, 1);
 
       if (REG_P (op0)
-	  || (GET_CODE (op0) == SUBREG
+	  || (SUBREG_P (op0)
 	      && REG_P (SUBREG_REG (op0))))
 	{
 	  rtx r = simplify_replace_rtx (b, op0, op1);
@@ -1511,7 +1511,7 @@ implies_p (rtx a, rtx b)
 	}
 
       if (REG_P (op1)
-	  || (GET_CODE (op1) == SUBREG
+	  || (SUBREG_P (op1)
 	      && REG_P (SUBREG_REG (op1))))
 	{
 	  rtx r = simplify_replace_rtx (b, op1, op0);
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index 63fccd23fae..b75d13492c5 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -1419,7 +1419,7 @@ analyze_insn_to_expand_var (class loop *loop, rtx_insn *insn)
     return NULL;
 
   if (!REG_P (dest)
-      && !(GET_CODE (dest) == SUBREG
+      && !(SUBREG_P (dest)
            && REG_P (SUBREG_REG (dest))))
     return NULL;
 
diff --git a/gcc/lower-subreg.c b/gcc/lower-subreg.c
index e1418e5ec51..54a8f78cd7b 100644
--- a/gcc/lower-subreg.c
+++ b/gcc/lower-subreg.c
@@ -300,14 +300,14 @@ init_lower_subreg (void)
 static bool
 simple_move_operand (rtx x)
 {
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     x = SUBREG_REG (x);
 
   if (!OBJECT_P (x))
     return false;
 
-  if (GET_CODE (x) == LABEL_REF
-      || GET_CODE (x) == SYMBOL_REF
+  if (LABEL_REF_P (x)
+      || SYMBOL_REF_P (x)
       || GET_CODE (x) == HIGH
       || GET_CODE (x) == CONST)
     return false;
@@ -492,7 +492,7 @@ find_decomposable_subregs (rtx *loc, enum classify_move_insn *pcmi)
   FOR_EACH_SUBRTX_VAR (iter, array, *loc, NONCONST)
     {
       rtx x = *iter;
-      if (GET_CODE (x) == SUBREG)
+      if (SUBREG_P (x))
 	{
 	  rtx inner = SUBREG_REG (x);
 	  unsigned int regno, outer_size, inner_size, outer_words, inner_words;
@@ -698,7 +698,7 @@ simplify_gen_subreg_concatn (machine_mode outermode, rtx op,
      If OP is a SUBREG of a CONCATN, then it must be a simple mode
      change with the same size and offset 0, or it must extract a
      part.  We shouldn't see anything else here.  */
-  if (GET_CODE (op) == SUBREG && GET_CODE (SUBREG_REG (op)) == CONCATN)
+  if (SUBREG_P (op) && GET_CODE (SUBREG_REG (op)) == CONCATN)
     {
       rtx op2;
 
@@ -757,7 +757,7 @@ resolve_reg_p (rtx x)
 static bool
 resolve_subreg_p (rtx x)
 {
-  if (GET_CODE (x) != SUBREG)
+  if (!SUBREG_P (x))
     return false;
   return resolve_reg_p (SUBREG_REG (x));
 }
@@ -933,7 +933,7 @@ resolve_simple_move (rtx set, rtx_insn *insn)
 	}
     }
 
-  if (GET_CODE (src) == SUBREG
+  if (SUBREG_P (src)
       && resolve_reg_p (SUBREG_REG (src))
       && (maybe_ne (SUBREG_BYTE (src), 0)
 	  || maybe_ne (orig_size, GET_MODE_SIZE (GET_MODE (SUBREG_REG (src))))))
@@ -947,7 +947,7 @@ resolve_simple_move (rtx set, rtx_insn *insn)
   /* Similarly if we are copying to a SUBREG of a decomposed reg where
      the SUBREG is larger than word size.  */
 
-  if (GET_CODE (dest) == SUBREG
+  if (SUBREG_P (dest)
       && resolve_reg_p (SUBREG_REG (dest))
       && (maybe_ne (SUBREG_BYTE (dest), 0)
 	  || maybe_ne (orig_size,
diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index f2584075937..65ccbf3798f 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -162,7 +162,7 @@ static int new_insn_uid_start;
 static inline rtx *
 strip_subreg (rtx *loc)
 {
-  return loc && GET_CODE (*loc) == SUBREG ? &SUBREG_REG (*loc) : loc;
+  return loc && SUBREG_P (*loc) ? &SUBREG_REG (*loc) : loc;
 }
 
 /* Return hard regno of REGNO or if it is was not assigned to a hard
@@ -269,7 +269,7 @@ in_class_p (rtx reg, enum reg_class cl, enum reg_class *new_class)
 	  && curr_insn_set != NULL
 	  && ((OBJECT_P (SET_SRC (curr_insn_set))
 	       && ! CONSTANT_P (SET_SRC (curr_insn_set)))
-	      || (GET_CODE (SET_SRC (curr_insn_set)) == SUBREG
+	      || (SUBREG_P (SET_SRC (curr_insn_set))
 		  && OBJECT_P (SUBREG_REG (SET_SRC (curr_insn_set)))
 		  && ! CONSTANT_P (SUBREG_REG (SET_SRC (curr_insn_set)))))))
     /* When we don't know what class will be used finally for reload
@@ -601,7 +601,7 @@ get_reload_reg (enum op_type type, machine_mode mode, rtx original,
 			      GET_MODE_SIZE (mode)))
 		  continue;
 		reg = lowpart_subreg (mode, reg, GET_MODE (reg));
-		if (reg == NULL_RTX || GET_CODE (reg) != SUBREG)
+		if (reg == NULL_RTX || !SUBREG_P (reg))
 		  continue;
 	      }
 	    *result_reg = reg;
@@ -680,7 +680,7 @@ operands_match_p (rtx x, rtx y, int y_hard_regno)
   if (x == y)
     return true;
   if ((code == REG || (code == SUBREG && REG_P (SUBREG_REG (x))))
-      && (REG_P (y) || (GET_CODE (y) == SUBREG && REG_P (SUBREG_REG (y)))))
+      && (REG_P (y) || (SUBREG_P (y) && REG_P (SUBREG_REG (y)))))
     {
       int j;
 
@@ -717,10 +717,10 @@ operands_match_p (rtx x, rtx y, int y_hard_regno)
   if (code == REG && REG_P (y))
     return REGNO (x) == REGNO (y);
 
-  if (code == REG && GET_CODE (y) == SUBREG && REG_P (SUBREG_REG (y))
+  if (code == REG && SUBREG_P (y) && REG_P (SUBREG_REG (y))
       && x == SUBREG_REG (y))
     return true;
-  if (GET_CODE (y) == REG && code == SUBREG && REG_P (SUBREG_REG (x))
+  if (REG_P (y) && code == SUBREG && REG_P (SUBREG_REG (x))
       && SUBREG_REG (x) == y)
     return true;
 
@@ -829,7 +829,7 @@ narrow_reload_pseudo_class (rtx reg, enum reg_class cl)
      registers for several reloads of one insn.	 */
   if (INSN_UID (curr_insn) >= new_insn_uid_start)
     return;
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   if (! REG_P (reg) || (int) REGNO (reg) < new_regno_start)
     return;
@@ -946,7 +946,7 @@ match_reload (signed char out, signed char *ins, signed char *outs,
 	  rtx_insn *clobber = emit_clobber (new_out_reg);
 	  LRA_TEMP_CLOBBER_P (PATTERN (clobber)) = 1;
 	  LRA_SUBREG_P (new_in_reg) = 1;
-	  if (GET_CODE (in_rtx) == SUBREG)
+	  if (SUBREG_P (in_rtx))
 	    {
 	      rtx subreg_reg = SUBREG_REG (in_rtx);
 	      
@@ -1125,7 +1125,7 @@ emit_spill_move (bool to_p, rtx mem_pseudo, rtx val)
       if (! MEM_P (val))
 	{
 	  val = gen_lowpart_SUBREG (GET_MODE (mem_pseudo),
-				    GET_CODE (val) == SUBREG
+				    SUBREG_P (val)
 				    ? SUBREG_REG (val) : val);
 	  LRA_SUBREG_P (val) = 1;
 	}
@@ -1156,9 +1156,9 @@ check_and_process_move (bool *change_p, bool *sec_mem_p ATTRIBUTE_UNUSED)
   lra_assert (curr_insn_set != NULL_RTX);
   dreg = dest = SET_DEST (curr_insn_set);
   sreg = src = SET_SRC (curr_insn_set);
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dreg = SUBREG_REG (dest);
-  if (GET_CODE (src) == SUBREG)
+  if (SUBREG_P (src))
     sreg = SUBREG_REG (src);
   if (! (REG_P (dreg) || MEM_P (dreg)) || ! (REG_P (sreg) || MEM_P (sreg)))
     return false;
@@ -1355,7 +1355,7 @@ process_addr_reg (rtx *loc, bool check_only_p, rtx_insn **before, rtx_insn **aft
   machine_mode mode;
   bool subreg_p, before_p = false;
 
-  subreg_p = GET_CODE (*loc) == SUBREG;
+  subreg_p = SUBREG_P (*loc);
   if (subreg_p)
     {
       reg = SUBREG_REG (*loc);
@@ -1483,7 +1483,7 @@ simplify_operand_subreg (int nop, machine_mode reg_mode)
 
   before = after = NULL;
 
-  if (GET_CODE (operand) != SUBREG)
+  if (!SUBREG_P (operand))
     return false;
 
   mode = GET_MODE (operand);
@@ -1961,7 +1961,7 @@ process_alt_operands (int only_alternative)
 
       operand_reg[nop] = reg = op;
       biggest_mode[nop] = GET_MODE (op);
-      if (GET_CODE (op) == SUBREG)
+      if (SUBREG_P (op))
 	{
 	  biggest_mode[nop] = wider_subreg_mode (op);
 	  operand_reg[nop] = reg = SUBREG_REG (op);
@@ -2600,7 +2600,7 @@ process_alt_operands (int only_alternative)
 		  if (curr_static_id->operand[nop].strict_low
 		      && REG_P (op)
 		      && hard_regno[nop] < 0
-		      && GET_CODE (*curr_id->operand_loc[nop]) == SUBREG
+		      && SUBREG_P (*curr_id->operand_loc[nop])
 		      && ira_class_hard_regs_num[this_alternative] > 0
 		      && (!targetm.hard_regno_mode_ok
 			  (ira_class_hard_regs[this_alternative][0],
@@ -2904,7 +2904,7 @@ process_alt_operands (int only_alternative)
 
       if (curr_insn_set != NULL_RTX && n_operands == 2
 	  /* Prevent processing non-move insns.  */
-	  && (GET_CODE (SET_SRC (curr_insn_set)) == SUBREG
+	  && (SUBREG_P (SET_SRC (curr_insn_set))
 	      || SET_SRC (curr_insn_set) == no_subreg_reg_operand[1])
 	  && ((! curr_alt_win[0] && ! curr_alt_win[1]
 	       && REG_P (no_subreg_reg_operand[0])
@@ -3338,7 +3338,7 @@ process_address_1 (int nop, bool check_only_p,
 		&& get_constraint_type (cn) == CT_FIXED_FORM
 	        && constraint_satisfied_p (op, cn)))
     decompose_mem_address (&ad, op);
-  else if (GET_CODE (op) == SUBREG
+  else if (SUBREG_P (op)
 	   && MEM_P (SUBREG_REG (op)))
     decompose_mem_address (&ad, SUBREG_REG (op));
   else
@@ -3886,7 +3886,7 @@ curr_insn_transform (bool check_only_p)
 	  continue;
 	
 	old = op = *curr_id->operand_loc[i];
-	if (GET_CODE (old) == SUBREG)
+	if (SUBREG_P (old))
 	  old = SUBREG_REG (old);
 	subst = get_equiv_with_elimination (old, curr_insn);
 	original_subreg_reg_mode[i] = VOIDmode;
@@ -3896,7 +3896,7 @@ curr_insn_transform (bool check_only_p)
 	    equiv_substition_p[i] = true;
 	    subst = copy_rtx (subst);
 	    lra_assert (REG_P (old));
-	    if (GET_CODE (op) != SUBREG)
+	    if (!SUBREG_P (op))
 	      *curr_id->operand_loc[i] = subst;
 	    else
 	      {
@@ -4154,7 +4154,7 @@ curr_insn_transform (bool check_only_p)
 	enum reg_class new_class;
 	rtx reg = *curr_id->operand_loc[i];
 
-	if (GET_CODE (reg) == SUBREG)
+	if (SUBREG_P (reg))
 	  reg = SUBREG_REG (reg);
 
 	if (REG_P (reg) && (regno = REGNO (reg)) >= FIRST_PSEUDO_REGISTER)
@@ -4176,7 +4176,7 @@ curr_insn_transform (bool check_only_p)
 	rtx subreg = NULL_RTX;
 	machine_mode mode = curr_operand_mode[i];
 
-	if (GET_CODE (op) == SUBREG)
+	if (SUBREG_P (op))
 	  {
 	    subreg = op;
 	    op = SUBREG_REG (op);
@@ -4272,10 +4272,10 @@ curr_insn_transform (bool check_only_p)
 	      && (curr_insn_set == NULL_RTX
 		  || !((REG_P (SET_SRC (curr_insn_set))
 			|| MEM_P (SET_SRC (curr_insn_set))
-			|| GET_CODE (SET_SRC (curr_insn_set)) == SUBREG)
+			|| SUBREG_P (SET_SRC (curr_insn_set)))
 		       && (REG_P (SET_DEST (curr_insn_set))
 			   || MEM_P (SET_DEST (curr_insn_set))
-			   || GET_CODE (SET_DEST (curr_insn_set)) == SUBREG))))
+			   || SUBREG_P (SET_DEST (curr_insn_set))))))
 	    optional_p = true;
 	  else if (goal_alt_matched[i][0] != -1
 		   && curr_static_id->operand[i].type == OP_OUT
@@ -4355,7 +4355,7 @@ curr_insn_transform (bool check_only_p)
 
 	  loc = curr_id->operand_loc[i];
 	  mode = curr_operand_mode[i];
-	  if (GET_CODE (*loc) == SUBREG)
+	  if (SUBREG_P (*loc))
 	    {
 	      reg = SUBREG_REG (*loc);
 	      poly_int64 byte = SUBREG_BYTE (*loc);
@@ -4485,7 +4485,7 @@ curr_insn_transform (bool check_only_p)
 	  lra_assert (REG_P (reg));
 	  regno = REGNO (reg);
 	  op = *curr_id->operand_loc[i]; /* Substitution.  */
-	  if (GET_CODE (op) == SUBREG)
+	  if (SUBREG_P (op))
 	    op = SUBREG_REG (op);
 	  gcc_assert (REG_P (op) && (int) REGNO (op) >= new_regno_start);
 	  bitmap_set_bit (&lra_optional_reload_pseudos, REGNO (op));
@@ -4948,7 +4948,7 @@ lra_constraints (bool first_p)
 	      /* The equivalence pseudo could be set up as SUBREG in a
 		 case when it is a call restore insn in a mode
 		 different from the pseudo mode.  */
-	      if (GET_CODE (dest_reg) == SUBREG)
+	      if (SUBREG_P (dest_reg))
 		dest_reg = SUBREG_REG (dest_reg);
 	      if ((REG_P (dest_reg)
 		   && (x = get_equiv (dest_reg)) != dest_reg
@@ -6765,7 +6765,7 @@ fix_bb_live_info (bitmap live, bitmap removed_pseudos)
 static int
 get_regno (rtx reg)
 {
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   if (REG_P (reg))
     return REGNO (reg);
@@ -6950,7 +6950,7 @@ remove_inheritance_pseudos (bitmap remove_pseudos)
 			  && (restore_rtx
 			      = lra_reg_info[dregno].restore_rtx) != NULL_RTX)
 			{
-			  if (GET_CODE (SET_DEST (set)) == SUBREG)
+			  if (SUBREG_P (SET_DEST (set)))
 			    SUBREG_REG (SET_DEST (set)) = restore_rtx;
 			  else
 			    SET_DEST (set) = restore_rtx;
diff --git a/gcc/lra-eliminations.c b/gcc/lra-eliminations.c
index 943da888848..a9d650dab7e 100644
--- a/gcc/lra-eliminations.c
+++ b/gcc/lra-eliminations.c
@@ -287,7 +287,7 @@ move_plus_up (rtx x)
   rtx subreg_reg;
   machine_mode x_mode, subreg_reg_mode;
   
-  if (GET_CODE (x) != SUBREG || !subreg_lowpart_p (x))
+  if (!SUBREG_P (x) || !subreg_lowpart_p (x))
     return x;
   subreg_reg = SUBREG_REG (x);
   x_mode = GET_MODE (x);
@@ -932,7 +932,7 @@ eliminate_regs_in_insn (rtx_insn *insn, bool replace_p, bool first_p,
 	{
 	  rtx reg = XEXP (plus_cst_src, 0);
 
-	  if (GET_CODE (reg) == SUBREG && subreg_lowpart_p (reg))
+	  if (SUBREG_P (reg) && subreg_lowpart_p (reg))
 	    reg = SUBREG_REG (reg);
 
 	  if (!REG_P (reg) || REGNO (reg) >= FIRST_PSEUDO_REGISTER)
@@ -943,7 +943,7 @@ eliminate_regs_in_insn (rtx_insn *insn, bool replace_p, bool first_p,
     {
       rtx reg = XEXP (plus_cst_src, 0);
 
-      if (GET_CODE (reg) == SUBREG)
+      if (SUBREG_P (reg))
 	reg = SUBREG_REG (reg);
 
       if (REG_P (reg) && (ep = get_elimination (reg)) != NULL)
@@ -964,7 +964,7 @@ eliminate_regs_in_insn (rtx_insn *insn, bool replace_p, bool first_p,
 	      offset = trunc_int_for_mode (offset, GET_MODE (plus_cst_src));
 	    }
 
-	  if (GET_CODE (XEXP (plus_cst_src, 0)) == SUBREG)
+	  if (SUBREG_P (XEXP (plus_cst_src, 0)))
 	    to_rtx = gen_lowpart (GET_MODE (XEXP (plus_cst_src, 0)), to_rtx);
 	  /* If we have a nonzero offset, and the source is already a
 	     simple REG, the following transformation would increase
diff --git a/gcc/lra.c b/gcc/lra.c
index d7593998f97..f7790f2e154 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -384,9 +384,9 @@ lra_emit_add (rtx x, rtx y, rtx z)
 	  base = a1;
 	  index = a2;
 	}
-      if ((base != NULL_RTX && ! (REG_P (base) || GET_CODE (base) == SUBREG))
+      if ((base != NULL_RTX && ! (REG_P (base) || SUBREG_P (base)))
 	  || (index != NULL_RTX
-	      && ! (REG_P (index) || GET_CODE (index) == SUBREG))
+	      && ! (REG_P (index) || SUBREG_P (index)))
 	  || (disp != NULL_RTX && ! CONSTANT_P (disp))
 	  || (scale != NULL_RTX && ! CONSTANT_P (scale)))
 	{
@@ -1459,7 +1459,7 @@ add_regs_to_insn_regno_info (lra_insn_recog_data_t data, rtx x,
   code = GET_CODE (x);
   mode = GET_MODE (x);
   subreg_p = false;
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       mode = wider_subreg_mode (x);
       if (read_modify_subreg_p (x))
diff --git a/gcc/mode-switching.c b/gcc/mode-switching.c
index 2ff21a40081..5e3bacb1e18 100644
--- a/gcc/mode-switching.c
+++ b/gcc/mode-switching.c
@@ -212,7 +212,7 @@ reg_becomes_live (rtx reg, const_rtx setter ATTRIBUTE_UNUSED, void *live)
 {
   int regno;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (!REG_P (reg))
@@ -266,7 +266,7 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
 	    && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) == 1
 	    && NONJUMP_INSN_P ((last_insn = BB_END (src_bb)))
 	    && GET_CODE (PATTERN (last_insn)) == USE
-	    && GET_CODE ((ret_reg = XEXP (PATTERN (last_insn), 0))) == REG)
+	    && REG_P ((ret_reg = XEXP (PATTERN (last_insn), 0))))
 	  {
 	    int ret_start = REGNO (ret_reg);
 	    int nregs = REG_NREGS (ret_reg);
@@ -302,7 +302,7 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
 		      case USE:
 			/* Skip USEs of multiple return registers.
 			   __builtin_apply pattern is also handled here.  */
-			if (GET_CODE (XEXP (return_copy_pat, 0)) == REG
+			if (REG_P (XEXP (return_copy_pat, 0))
 			    && (targetm.calls.function_value_regno_p
 				(REGNO (XEXP (return_copy_pat, 0)))))
 			  {
@@ -344,7 +344,7 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
 			       the previous insn is the clobber for
 			       the return register.  */
 			    copy_reg = SET_DEST (return_copy_pat);
-			    if (GET_CODE (copy_reg) == REG
+			    if (REG_P (copy_reg)
 				&& !HARD_REGISTER_NUM_P (REGNO (copy_reg)))
 			      {
 				if (INSN_P (PREV_INSN (return_copy)))
@@ -358,10 +358,10 @@ create_pre_exit (int n_entities, int *entity_map, const int *num_modes)
 			  }
 		      }
 		    copy_reg = SET_DEST (return_copy_pat);
-		    if (GET_CODE (copy_reg) == REG)
+		    if (REG_P (copy_reg))
 		      copy_start = REGNO (copy_reg);
-		    else if (GET_CODE (copy_reg) == SUBREG
-			     && GET_CODE (SUBREG_REG (copy_reg)) == REG)
+		    else if (SUBREG_P (copy_reg)
+			     && REG_P (SUBREG_REG (copy_reg)))
 		      copy_start = REGNO (SUBREG_REG (copy_reg));
 		    else
 		      {
diff --git a/gcc/modulo-sched.c b/gcc/modulo-sched.c
index c355594bb6b..4ebd514daed 100644
--- a/gcc/modulo-sched.c
+++ b/gcc/modulo-sched.c
@@ -1477,7 +1477,7 @@ sms_schedule (void)
                 && !single_set (insn) && GET_CODE (PATTERN (insn)) != USE
                 && !reg_mentioned_p (count_reg, insn))
             || (INSN_P (insn) && (set = single_set (insn))
-                && GET_CODE (SET_DEST (set)) == SUBREG))
+                && SUBREG_P (SET_DEST (set))))
         break;
       }
 
diff --git a/gcc/optabs.c b/gcc/optabs.c
index 06bcaab1f55..77a58bdde2c 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -208,7 +208,7 @@ widen_operand (rtx op, machine_mode mode, machine_mode oldmode,
      a promoted object differs from our extension.  */
   if (! no_extend
       || !is_a <scalar_int_mode> (mode, &int_mode)
-      || (GET_CODE (op) == SUBREG && SUBREG_PROMOTED_VAR_P (op)
+      || (SUBREG_P (op) && SUBREG_PROMOTED_VAR_P (op)
 	  && SUBREG_CHECK_PROMOTED_SIGN (op, unsignedp)))
     return convert_modes (mode, oldmode, op, unsignedp);
 
@@ -5276,7 +5276,7 @@ debug_optab_libfuncs (void)
 	rtx l = optab_libfunc ((optab) i, (machine_mode) j);
 	if (l)
 	  {
-	    gcc_assert (GET_CODE (l) == SYMBOL_REF);
+	    gcc_assert (SYMBOL_REF_P (l));
 	    fprintf (stderr, "%s\t%s:\t%s\n",
 		     GET_RTX_NAME (optab_to_code ((optab) i)),
 		     GET_MODE_NAME (j),
@@ -5293,7 +5293,7 @@ debug_optab_libfuncs (void)
 					 (machine_mode) k);
 	  if (l)
 	    {
-	      gcc_assert (GET_CODE (l) == SYMBOL_REF);
+	      gcc_assert (SYMBOL_REF_P (l));
 	      fprintf (stderr, "%s\t%s\t%s:\t%s\n",
 		       GET_RTX_NAME (optab_to_code ((optab) i)),
 		       GET_MODE_NAME (j),
diff --git a/gcc/postreload-gcse.c b/gcc/postreload-gcse.c
index e4737670883..429a44a4762 100644
--- a/gcc/postreload-gcse.c
+++ b/gcc/postreload-gcse.c
@@ -618,7 +618,7 @@ find_mem_conflicts (rtx dest, const_rtx setter ATTRIBUTE_UNUSED,
 {
   rtx mem_op = (rtx) data;
 
-  while (GET_CODE (dest) == SUBREG
+  while (SUBREG_P (dest)
 	 || GET_CODE (dest) == ZERO_EXTRACT
 	 || GET_CODE (dest) == STRICT_LOW_PART)
     dest = XEXP (dest, 0);
@@ -733,7 +733,7 @@ record_last_set_info (rtx dest, const_rtx setter ATTRIBUTE_UNUSED, void *data)
 {
   rtx_insn *last_set_insn = (rtx_insn *) data;
 
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dest = SUBREG_REG (dest);
 
   if (REG_P (dest))
diff --git a/gcc/postreload.c b/gcc/postreload.c
index 728aa9b0ed5..f1951b305ff 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -1418,7 +1418,7 @@ reload_combine_note_store (rtx dst, const_rtx set, void *data ATTRIBUTE_UNUSED)
   int i;
   machine_mode mode = GET_MODE (dst);
 
-  if (GET_CODE (dst) == SUBREG)
+  if (SUBREG_P (dst))
     {
       regno = subreg_regno_offset (REGNO (SUBREG_REG (dst)),
 				   GET_MODE (SUBREG_REG (dst)),
@@ -1670,7 +1670,7 @@ move2add_record_mode (rtx reg)
   int regno, nregs;
   machine_mode mode = GET_MODE (reg);
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     {
       regno = subreg_regno (reg);
       nregs = subreg_nregs (reg);
@@ -2053,16 +2053,16 @@ reload_cse_move2add (rtx_insn *first)
 	     (set (REGX) (CONST (PLUS (SYMBOL_REF) (CONST_INT A))))
 	     ...
 	     (set (REGY) (CONST (PLUS (REGX) (CONST_INT B-A))))  */
-	  if ((GET_CODE (src) == SYMBOL_REF
+	  if ((SYMBOL_REF_P (src)
 	       || (GET_CODE (src) == CONST
 		   && GET_CODE (XEXP (src, 0)) == PLUS
-		   && GET_CODE (XEXP (XEXP (src, 0), 0)) == SYMBOL_REF
+		   && SYMBOL_REF_P (XEXP (XEXP (src, 0), 0))
 		   && CONST_INT_P (XEXP (XEXP (src, 0), 1))))
 	      && dbg_cnt (cse2_move2add))
 	    {
 	      rtx sym, off;
 
-	      if (GET_CODE (src) == SYMBOL_REF)
+	      if (SYMBOL_REF_P (src))
 		{
 		  sym = src;
 		  off = const0_rtx;
@@ -2188,7 +2188,7 @@ move2add_note_store (rtx dst, const_rtx set, void *data)
       return;
     }
 
-  if (GET_CODE (dst) == SUBREG)
+  if (SUBREG_P (dst))
     regno = subreg_regno (dst);
   else if (REG_P (dst))
     regno = REGNO (dst);
@@ -2204,14 +2204,14 @@ move2add_note_store (rtx dst, const_rtx set, void *data)
       rtx off;
 
       note = find_reg_equal_equiv_note (insn);
-      if (note && GET_CODE (XEXP (note, 0)) == SYMBOL_REF)
+      if (note && SYMBOL_REF_P (XEXP (note, 0)))
 	{
 	  sym = XEXP (note, 0);
 	  off = const0_rtx;
 	}
       else if (note && GET_CODE (XEXP (note, 0)) == CONST
 	       && GET_CODE (XEXP (XEXP (note, 0), 0)) == PLUS
-	       && GET_CODE (XEXP (XEXP (XEXP (note, 0), 0), 0)) == SYMBOL_REF
+	       && SYMBOL_REF_P (XEXP (XEXP (XEXP (note, 0), 0), 0))
 	       && CONST_INT_P (XEXP (XEXP (XEXP (note, 0), 0), 1)))
 	{
 	  sym = XEXP (XEXP (XEXP (note, 0), 0), 0);
diff --git a/gcc/print-rtl.c b/gcc/print-rtl.c
index 10948efddd9..53212ce878f 100644
--- a/gcc/print-rtl.c
+++ b/gcc/print-rtl.c
@@ -215,7 +215,7 @@ rtx_writer::print_rtx_operand_code_0 (const_rtx in_rtx ATTRIBUTE_UNUSED,
 				      int idx ATTRIBUTE_UNUSED)
 {
 #ifndef GENERATOR_FILE
-  if (idx == 1 && GET_CODE (in_rtx) == SYMBOL_REF)
+  if (idx == 1 && SYMBOL_REF_P (in_rtx))
     {
       int flags = SYMBOL_REF_FLAGS (in_rtx);
       if (flags)
@@ -573,7 +573,7 @@ rtx_writer::print_rtx_operand_code_u (const_rtx in_rtx, int idx)
       rtx sub = XEXP (in_rtx, idx);
       enum rtx_code subc = GET_CODE (sub);
 
-      if (GET_CODE (in_rtx) == LABEL_REF)
+      if (LABEL_REF_P (in_rtx))
 	{
 	  if (subc == NOTE
 	      && NOTE_KIND (sub) == NOTE_INSN_DELETED_LABEL)
diff --git a/gcc/read-rtl-function.c b/gcc/read-rtl-function.c
index f41f54a0d4a..0a3f79c8c36 100644
--- a/gcc/read-rtl-function.c
+++ b/gcc/read-rtl-function.c
@@ -955,7 +955,7 @@ function_reader::read_rtx_operand_u (rtx x, int idx)
 {
   /* In compact mode, the PREV/NEXT insn uids are not dumped, so skip
      the "uu" when reading. */
-  if (is_compact () && GET_CODE (x) != LABEL_REF)
+  if (is_compact () && !LABEL_REF_P (x))
     return;
 
   struct md_name name;
@@ -1446,7 +1446,7 @@ ensure_regno (int regno)
 static rtx
 consolidate_reg (rtx x)
 {
-  gcc_assert (GET_CODE (x) == REG);
+  gcc_assert (REG_P (x));
 
   unsigned int regno = REGNO (x);
 
@@ -1463,7 +1463,7 @@ consolidate_reg (rtx x)
   if (regno_reg_rtx[regno] == NULL)
     regno_reg_rtx[regno] = x;
   /* Use it.  */
-  gcc_assert (GET_CODE (regno_reg_rtx[regno]) == REG);
+  gcc_assert (REG_P (regno_reg_rtx[regno]));
   gcc_assert (REGNO (regno_reg_rtx[regno]) == regno);
   if (GET_MODE (x) == GET_MODE (regno_reg_rtx[regno]))
     return regno_reg_rtx[regno];
diff --git a/gcc/read-rtl.c b/gcc/read-rtl.c
index 3b5d9997603..3dcf35b8f17 100644
--- a/gcc/read-rtl.c
+++ b/gcc/read-rtl.c
@@ -255,7 +255,7 @@ find_int (const char *name)
 static void
 apply_int_iterator (rtx x, unsigned int index, int value)
 {
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     SUBREG_BYTE (x) = value;
   else
     XINT (x, index) = value;
diff --git a/gcc/recog.c b/gcc/recog.c
index a9f584bc0dc..5b6e957b2b4 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -123,7 +123,7 @@ asm_labels_ok (rtx body)
     return true;
 
   for (i = 0; i < ASM_OPERANDS_LABEL_LENGTH (asmop); i++)
-    if (GET_CODE (ASM_OPERANDS_LABEL (asmop, i)) != LABEL_REF)
+    if (!LABEL_REF_P (ASM_OPERANDS_LABEL (asmop, i)))
       return false;
 
   return true;
@@ -1087,7 +1087,7 @@ address_operand (rtx op, machine_mode mode)
 int
 register_operand (rtx op, machine_mode mode)
 {
-  if (GET_CODE (op) == SUBREG)
+  if (SUBREG_P (op))
     {
       rtx sub = SUBREG_REG (op);
 
@@ -1213,7 +1213,7 @@ const_scalar_int_operand (rtx op, machine_mode mode)
 int
 const_double_operand (rtx op, machine_mode mode)
 {
-  return (GET_CODE (op) == CONST_DOUBLE)
+  return (CONST_DOUBLE_P (op))
 	  && (GET_MODE (op) == mode || mode == VOIDmode);
 }
 #else
@@ -1360,7 +1360,7 @@ memory_operand (rtx op, machine_mode mode)
     return 0;
 
   inner = op;
-  if (GET_CODE (inner) == SUBREG)
+  if (SUBREG_P (inner))
     inner = SUBREG_REG (inner);
 
   return (MEM_P (inner) && general_operand (op, mode));
@@ -1374,7 +1374,7 @@ indirect_operand (rtx op, machine_mode mode)
 {
   /* Before reload, a SUBREG isn't in memory (see memory_operand, above).  */
   if (! reload_completed
-      && GET_CODE (op) == SUBREG && MEM_P (SUBREG_REG (op)))
+      && SUBREG_P (op) && MEM_P (SUBREG_REG (op)))
     {
       if (mode != VOIDmode && GET_MODE (op) != mode)
 	return 0;
@@ -2603,7 +2603,7 @@ constrain_operands (int strict, alternative_mask alternatives)
 	  if (UNARY_P (op))
 	    op = XEXP (op, 0);
 
-	  if (GET_CODE (op) == SUBREG)
+	  if (SUBREG_P (op))
 	    {
 	      if (REG_P (SUBREG_REG (op))
 		  && REGNO (SUBREG_REG (op)) < FIRST_PSEUDO_REGISTER)
diff --git a/gcc/ree.c b/gcc/ree.c
index c63e1591ae3..ddd31ab6c06 100644
--- a/gcc/ree.c
+++ b/gcc/ree.c
@@ -268,7 +268,7 @@ update_reg_equal_equiv_notes (rtx_insn *insn, machine_mode new_mode,
 	  rtx orig_src = XEXP (*loc, 0);
 	  /* Update equivalency constants.  Recall that RTL constants are
 	     sign-extended.  */
-	  if (GET_CODE (orig_src) == CONST_INT
+	  if (CONST_INT_P (orig_src)
 	      && HWI_COMPUTABLE_MODE_P (new_mode))
 	    {
 	      if (INTVAL (orig_src) >= 0 || code == SIGN_EXTEND)
@@ -336,7 +336,7 @@ combine_set_extension (ext_cand *cand, rtx_insn *curr_insn, rtx *orig_set)
 
   /* Merge constants by directly moving the constant into the register under
      some conditions.  Recall that RTL constants are sign-extended.  */
-  if (GET_CODE (orig_src) == CONST_INT
+  if (CONST_INT_P (orig_src)
       && HWI_COMPUTABLE_MODE_P (cand->mode))
     {
       if (INTVAL (orig_src) >= 0 || cand->code == SIGN_EXTEND)
@@ -467,7 +467,7 @@ get_defs (rtx_insn *insn, rtx reg, vec<rtx_insn *> *dest)
 
   FOR_EACH_INSN_USE (use, insn)
     {
-      if (GET_CODE (DF_REF_REG (use)) == SUBREG)
+      if (SUBREG_P (DF_REF_REG (use)))
         return NULL;
       if (REGNO (DF_REF_REG (use)) == REGNO (reg))
 	break;
@@ -541,10 +541,10 @@ is_cond_copy_insn (rtx_insn *insn, rtx *reg1, rtx *reg2)
 
   if (expr != NULL_RTX
       && GET_CODE (expr) == SET
-      && GET_CODE (SET_DEST (expr)) == REG
+      && REG_P (SET_DEST (expr))
       && GET_CODE (SET_SRC (expr))  == IF_THEN_ELSE
-      && GET_CODE (XEXP (SET_SRC (expr), 1)) == REG
-      && GET_CODE (XEXP (SET_SRC (expr), 2)) == REG)
+      && REG_P (XEXP (SET_SRC (expr), 1))
+      && REG_P (XEXP (SET_SRC (expr), 2)))
     {
       *reg1 = XEXP (SET_SRC (expr), 1);
       *reg2 = XEXP (SET_SRC (expr), 2);
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 710f14a9544..45b0b35160a 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -492,7 +492,7 @@ check_asm_stack_operands (rtx_insn *insn)
 
   /* Strip SUBREGs here to make the following code simpler.  */
   for (i = 0; i < recog_data.n_operands; i++)
-    if (GET_CODE (recog_data.operand[i]) == SUBREG
+    if (SUBREG_P (recog_data.operand[i])
 	&& REG_P (SUBREG_REG (recog_data.operand[i])))
       recog_data.operand[i] = SUBREG_REG (recog_data.operand[i]);
 
@@ -510,7 +510,7 @@ check_asm_stack_operands (rtx_insn *insn)
 	    rtx clobber = XVECEXP (body, 0, i);
 	    rtx reg = XEXP (clobber, 0);
 
-	    if (GET_CODE (reg) == SUBREG && REG_P (SUBREG_REG (reg)))
+	    if (SUBREG_P (reg) && REG_P (SUBREG_REG (reg)))
 	      reg = SUBREG_REG (reg);
 
 	    if (STACK_REG_P (reg))
@@ -908,7 +908,7 @@ emit_swap_insn (rtx_insn *insn, stack_ptr regstack, rtx reg)
 	i1src = XEXP (i1src, 0);
       if (REG_P (i1dest)
 	  && REGNO (i1dest) == FIRST_STACK_REG
-	  && (MEM_P (i1src) || GET_CODE (i1src) == CONST_DOUBLE)
+	  && (MEM_P (i1src) || CONST_DOUBLE_P (i1src))
 	  && !side_effects_p (i1src)
 	  && hard_regno == FIRST_STACK_REG + 1
 	  && i1 != BB_HEAD (current_block))
@@ -949,7 +949,7 @@ emit_swap_insn (rtx_insn *insn, stack_ptr regstack, rtx reg)
 		 %st to %st(1), consider swapping them.  */
 	      if (REG_P (i2dest)
 		  && REGNO (i2dest) == FIRST_STACK_REG
-		  && (MEM_P (i2src) || GET_CODE (i2src) == CONST_DOUBLE)
+		  && (MEM_P (i2src) || CONST_DOUBLE_P (i2src))
 		  /* Ensure i2 doesn't have other side-effects.  */
 		  && !side_effects_p (i2src)
 		  /* And that the two instructions can actually be
@@ -2139,7 +2139,7 @@ subst_asm_stack_regs (rtx_insn *insn, stack_ptr regstack)
 
   /* Strip SUBREGs here to make the following code simpler.  */
   for (i = 0; i < recog_data.n_operands; i++)
-    if (GET_CODE (recog_data.operand[i]) == SUBREG
+    if (SUBREG_P (recog_data.operand[i])
 	&& REG_P (SUBREG_REG (recog_data.operand[i])))
       {
 	recog_data.operand_loc[i] = & SUBREG_REG (recog_data.operand[i]);
@@ -2163,7 +2163,7 @@ subst_asm_stack_regs (rtx_insn *insn, stack_ptr regstack)
       rtx reg = XEXP (note, 0);
       rtx *loc = & XEXP (note, 0);
 
-      if (GET_CODE (reg) == SUBREG && REG_P (SUBREG_REG (reg)))
+      if (SUBREG_P (reg) && REG_P (SUBREG_REG (reg)))
 	{
 	  loc = & SUBREG_REG (reg);
 	  reg = SUBREG_REG (reg);
@@ -2196,7 +2196,7 @@ subst_asm_stack_regs (rtx_insn *insn, stack_ptr regstack)
 	    rtx reg = XEXP (clobber, 0);
 	    rtx *loc = & XEXP (clobber, 0);
 
-	    if (GET_CODE (reg) == SUBREG && REG_P (SUBREG_REG (reg)))
+	    if (SUBREG_P (reg) && REG_P (SUBREG_REG (reg)))
 	      {
 		loc = & SUBREG_REG (reg);
 		reg = SUBREG_REG (reg);
diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index a18c24f4797..680cf15e719 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -189,7 +189,7 @@ kill_value_regno (unsigned int regno, unsigned int nregs,
 static void
 kill_value (const_rtx x, struct value_data *vd)
 {
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     {
       rtx tmp = simplify_subreg (GET_MODE (x), SUBREG_REG (x),
 				 GET_MODE (SUBREG_REG (x)), SUBREG_BYTE (x));
@@ -541,13 +541,13 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl,
 	rtx *locB = NULL;
 	enum rtx_code index_code = SCRATCH;
 
-	if (GET_CODE (op0) == SUBREG)
+	if (SUBREG_P (op0))
 	  {
 	    op0 = SUBREG_REG (op0);
 	    code0 = GET_CODE (op0);
 	  }
 
-	if (GET_CODE (op1) == SUBREG)
+	if (SUBREG_P (op1))
 	  {
 	    op1 = SUBREG_REG (op1);
 	    code1 = GET_CODE (op1);
diff --git a/gcc/reginfo.c b/gcc/reginfo.c
index 4832affd436..a3d16005d91 100644
--- a/gcc/reginfo.c
+++ b/gcc/reginfo.c
@@ -1109,7 +1109,7 @@ reg_scan_mark_refs (rtx x, rtx_insn *insn)
     case SET:
       /* Count a set of the destination if it is a register.  */
       for (dest = SET_DEST (x);
-	   GET_CODE (dest) == SUBREG || GET_CODE (dest) == STRICT_LOW_PART
+	   SUBREG_P (dest) || GET_CODE (dest) == STRICT_LOW_PART
 	   || GET_CODE (dest) == ZERO_EXTRACT;
 	   dest = XEXP (dest, 0))
 	;
@@ -1144,21 +1144,21 @@ reg_scan_mark_refs (rtx x, rtx_insn *insn)
 		  && REG_P (XEXP (SET_SRC (x), 0))
 		  && REG_POINTER (XEXP (SET_SRC (x), 0)))
 	      || GET_CODE (SET_SRC (x)) == CONST
-	      || GET_CODE (SET_SRC (x)) == SYMBOL_REF
-	      || GET_CODE (SET_SRC (x)) == LABEL_REF
+	      || SYMBOL_REF_P (SET_SRC (x))
+	      || LABEL_REF_P (SET_SRC (x))
 	      || (GET_CODE (SET_SRC (x)) == HIGH
 		  && (GET_CODE (XEXP (SET_SRC (x), 0)) == CONST
-		      || GET_CODE (XEXP (SET_SRC (x), 0)) == SYMBOL_REF
-		      || GET_CODE (XEXP (SET_SRC (x), 0)) == LABEL_REF))
+		      || SYMBOL_REF_P (XEXP (SET_SRC (x), 0))
+		      || LABEL_REF_P (XEXP (SET_SRC (x), 0))))
 	      || ((GET_CODE (SET_SRC (x)) == PLUS
 		   || GET_CODE (SET_SRC (x)) == LO_SUM)
 		  && (GET_CODE (XEXP (SET_SRC (x), 1)) == CONST
-		      || GET_CODE (XEXP (SET_SRC (x), 1)) == SYMBOL_REF
-		      || GET_CODE (XEXP (SET_SRC (x), 1)) == LABEL_REF))
+		      || SYMBOL_REF_P (XEXP (SET_SRC (x), 1))
+		      || LABEL_REF_P (XEXP (SET_SRC (x), 1))))
 	      || ((note = find_reg_note (insn, REG_EQUAL, 0)) != 0
 		  && (GET_CODE (XEXP (note, 0)) == CONST
-		      || GET_CODE (XEXP (note, 0)) == SYMBOL_REF
-		      || GET_CODE (XEXP (note, 0)) == LABEL_REF))))
+		      || SYMBOL_REF_P (XEXP (note, 0))
+		      || LABEL_REF_P (XEXP (note, 0))))))
 	REG_POINTER (SET_DEST (x)) = 1;
 
       /* If this is setting a register from a register or from a simple
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 73c0ceda341..0ce4d180880 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -1055,7 +1055,7 @@ note_sets_clobbers (rtx x, const_rtx set, void *data)
   enum rtx_code code = *(enum rtx_code *)data;
   class du_head *chain;
 
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     x = SUBREG_REG (x);
   if (!REG_P (x) || GET_CODE (set) != code)
     return;
@@ -1088,8 +1088,8 @@ scan_rtx_reg (rtx_insn *insn, rtx *loc, enum reg_class cl, enum scan_actions act
 	     a single output.  */
 	  if (recog_data.n_operands == 2
 	      && GET_CODE (pat) == SET
-	      && GET_CODE (SET_DEST (pat)) == REG
-	      && GET_CODE (SET_SRC (pat)) == REG
+	      && REG_P (SET_DEST (pat))
+	      && REG_P (SET_SRC (pat))
 	      && terminated_this_insn
 	      && terminated_this_insn->nregs
 		 == REG_NREGS (recog_data.operand[1]))
@@ -1291,13 +1291,13 @@ scan_rtx_address (rtx_insn *insn, rtx *loc, enum reg_class cl,
 	rtx *locB = NULL;
 	enum rtx_code index_code = SCRATCH;
 
-	if (GET_CODE (op0) == SUBREG)
+	if (SUBREG_P (op0))
 	  {
 	    op0 = SUBREG_REG (op0);
 	    code0 = GET_CODE (op0);
 	  }
 
-	if (GET_CODE (op1) == SUBREG)
+	if (SUBREG_P (op1))
 	  {
 	    op1 = SUBREG_REG (op1);
 	    code1 = GET_CODE (op1);
diff --git a/gcc/reload.c b/gcc/reload.c
index 72cc38a0e09..52233eb55e1 100644
--- a/gcc/reload.c
+++ b/gcc/reload.c
@@ -840,7 +840,7 @@ reload_inner_reg_of_subreg (rtx x, machine_mode mode, bool output)
   rtx inner;
 
   /* Only SUBREGs are problematical.  */
-  if (GET_CODE (x) != SUBREG)
+  if (!SUBREG_P (x))
     return false;
 
   inner = SUBREG_REG (x);
@@ -1051,7 +1051,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
      no choice, so we hope we do get the right register class there.  */
 
   scalar_int_mode inner_mode;
-  if (in != 0 && GET_CODE (in) == SUBREG
+  if (in != 0 && SUBREG_P (in)
       && (subreg_lowpart_p (in) || strict_low)
       && targetm.can_change_mode_class (GET_MODE (SUBREG_REG (in)),
 					inmode, rclass)
@@ -1149,7 +1149,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
      entitled to clobber it all (except in the case of a word mode subreg
      or of a STRICT_LOW_PART, in that latter case the constraint should
      label it input-output.)  */
-  if (out != 0 && GET_CODE (out) == SUBREG
+  if (out != 0 && SUBREG_P (out)
       && (subreg_lowpart_p (out) || strict_low)
       && targetm.can_change_mode_class (GET_MODE (SUBREG_REG (out)),
 					outmode, rclass)
@@ -1232,13 +1232,13 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
   /* If IN is a SUBREG of a hard register, make a new REG.  This
      simplifies some of the cases below.  */
 
-  if (in != 0 && GET_CODE (in) == SUBREG && REG_P (SUBREG_REG (in))
+  if (in != 0 && SUBREG_P (in) && REG_P (SUBREG_REG (in))
       && REGNO (SUBREG_REG (in)) < FIRST_PSEUDO_REGISTER
       && ! dont_remove_subreg)
     in = gen_rtx_REG (GET_MODE (in), subreg_regno (in));
 
   /* Similarly for OUT.  */
-  if (out != 0 && GET_CODE (out) == SUBREG
+  if (out != 0 && SUBREG_P (out)
       && REG_P (SUBREG_REG (out))
       && REGNO (SUBREG_REG (out)) < FIRST_PSEUDO_REGISTER
       && ! dont_remove_subreg)
@@ -1270,12 +1270,12 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
 #ifdef LIMIT_RELOAD_CLASS
   if (in_subreg_loc)
     rclass = LIMIT_RELOAD_CLASS (inmode, rclass);
-  else if (in != 0 && GET_CODE (in) == SUBREG)
+  else if (in != 0 && SUBREG_P (in))
     rclass = LIMIT_RELOAD_CLASS (GET_MODE (SUBREG_REG (in)), rclass);
 
   if (out_subreg_loc)
     rclass = LIMIT_RELOAD_CLASS (outmode, rclass);
-  if (out != 0 && GET_CODE (out) == SUBREG)
+  if (out != 0 && SUBREG_P (out))
     rclass = LIMIT_RELOAD_CLASS (GET_MODE (SUBREG_REG (out)), rclass);
 #endif
 
@@ -1351,7 +1351,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
       if (subreg_in_class == NO_REGS
 	  && in != 0
 	  && (REG_P (in)
-	      || (GET_CODE (in) == SUBREG && REG_P (SUBREG_REG (in))))
+	      || (SUBREG_P (in) && REG_P (SUBREG_REG (in))))
 	  && reg_or_subregno (in) < FIRST_PSEUDO_REGISTER)
 	subreg_in_class = REGNO_REG_CLASS (reg_or_subregno (in));
       /* If a memory location is needed for the copy, make one.  */
@@ -1383,7 +1383,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
 
       if (out != 0
           && (REG_P (out)
-	      || (GET_CODE (out) == SUBREG && REG_P (SUBREG_REG (out))))
+	      || (SUBREG_P (out) && REG_P (SUBREG_REG (out))))
 	  && reg_or_subregno (out) < FIRST_PSEUDO_REGISTER
 	  && (targetm.secondary_memory_needed
 	      (outmode, rclass, REGNO_REG_CLASS (reg_or_subregno (out)))))
@@ -1597,7 +1597,7 @@ push_reload (rtx in, rtx out, rtx *inloc, rtx *outloc,
 	       Is there any simple coherent way to describe the two together?
 	       What's going on here.  */
 	    && (in != out
-		|| (GET_CODE (in) == SUBREG
+		|| (SUBREG_P (in)
 		    && (known_equal_after_align_up
 			(GET_MODE_SIZE (GET_MODE (in)),
 			 GET_MODE_SIZE (GET_MODE (SUBREG_REG (in))),
@@ -1951,7 +1951,7 @@ find_dummy_reload (rtx real_in, rtx real_out, rtx *inloc, rtx *outloc,
      respectively refers to a hard register.  */
 
   /* Find the inside of any subregs.  */
-  while (GET_CODE (out) == SUBREG)
+  while (SUBREG_P (out))
     {
       if (REG_P (SUBREG_REG (out))
 	  && REGNO (SUBREG_REG (out)) < FIRST_PSEUDO_REGISTER)
@@ -1961,7 +1961,7 @@ find_dummy_reload (rtx real_in, rtx real_out, rtx *inloc, rtx *outloc,
 					   GET_MODE (out));
       out = SUBREG_REG (out);
     }
-  while (GET_CODE (in) == SUBREG)
+  while (SUBREG_P (in))
     {
       if (REG_P (SUBREG_REG (in))
 	  && REGNO (SUBREG_REG (in)) < FIRST_PSEUDO_REGISTER)
@@ -2134,7 +2134,7 @@ hard_reg_set_here_p (unsigned int beg_regno, unsigned int end_regno, rtx x)
     {
       rtx op0 = SET_DEST (x);
 
-      while (GET_CODE (op0) == SUBREG)
+      while (SUBREG_P (op0))
 	op0 = SUBREG_REG (op0);
       if (REG_P (op0))
 	{
@@ -2204,7 +2204,7 @@ operands_match_p (rtx x, rtx y)
   if (x == y)
     return 1;
   if ((code == REG || (code == SUBREG && REG_P (SUBREG_REG (x))))
-      && (REG_P (y) || (GET_CODE (y) == SUBREG
+      && (REG_P (y) || (SUBREG_P (y)
 				  && REG_P (SUBREG_REG (y)))))
     {
       int j;
@@ -2222,7 +2222,7 @@ operands_match_p (rtx x, rtx y)
       else
 	i = REGNO (x);
 
-      if (GET_CODE (y) == SUBREG)
+      if (SUBREG_P (y))
 	{
 	  j = REGNO (SUBREG_REG (y));
 	  if (j >= FIRST_PSEUDO_REGISTER)
@@ -2836,7 +2836,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 	  /* If we now have a simple operand where we used to have a
 	     PLUS or MULT, re-recognize and try again.  */
 	  if ((OBJECT_P (*recog_data.operand_loc[i])
-	       || GET_CODE (*recog_data.operand_loc[i]) == SUBREG)
+	       || SUBREG_P (*recog_data.operand_loc[i]))
 	      && (GET_CODE (recog_data.operand[i]) == MULT
 		  || GET_CODE (recog_data.operand[i]) == PLUS))
 	    {
@@ -3073,7 +3073,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 		 the REG or MEM (or maybe even a constant) within.
 		 (Constants can occur as a result of reg_equiv_constant.)  */
 
-	      while (GET_CODE (operand) == SUBREG)
+	      while (SUBREG_P (operand))
 		{
 		  /* Offset only matters when operand is a REG and
 		     it is a hard reg.  This is because it is passed
@@ -3692,7 +3692,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 			 it's costly to reload it, so reload the input instead.  */
 		      if (small_register_class_p (this_alternative[i])
 			  && (REG_P (recog_data.operand[j])
-			      || GET_CODE (recog_data.operand[j]) == SUBREG))
+			      || SUBREG_P (recog_data.operand[j])))
 			{
 			  losers++;
 			  this_alternative_win[j] = 0;
@@ -3907,7 +3907,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 
 	/* Reloads of SUBREGs of CONSTANT RTXs are handled later in
 	   push_reload so we have to let them pass here.  */
-	if (GET_CODE (op) == SUBREG)
+	if (SUBREG_P (op))
 	  {
 	    subreg = op;
 	    op = SUBREG_REG (op);
@@ -4086,7 +4086,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 
 	rtx operand = recog_data.operand[i];
 
-	while (GET_CODE (operand) == SUBREG)
+	while (SUBREG_P (operand))
 	  operand = SUBREG_REG (operand);
 	if ((MEM_P (operand)
 	     || (REG_P (operand)
@@ -4136,7 +4136,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 	  {
 	    operand = *recog_data.operand_loc[i];
 
-	    while (GET_CODE (operand) == SUBREG)
+	    while (SUBREG_P (operand))
 	      operand = SUBREG_REG (operand);
 	    if (REG_P (operand))
 	      {
@@ -4163,7 +4163,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 
 	rtx operand = recog_data.operand[i];
 
-	while (GET_CODE (operand) == SUBREG)
+	while (SUBREG_P (operand))
 	  operand = SUBREG_REG (operand);
 	if ((MEM_P (operand)
 	     || (REG_P (operand)
@@ -4200,7 +4200,7 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
 	  /* If we're replacing an operand with a LABEL_REF, we need to
 	     make sure that there's a REG_LABEL_OPERAND note attached to
 	     this instruction.  */
-	  if (GET_CODE (substitution) == LABEL_REF
+	  if (LABEL_REF_P (substitution)
 	      && !find_reg_note (insn, REG_LABEL_OPERAND,
 				 label_ref_label (substitution))
 	      /* For a JUMP_P, if it was a branch target it must have
@@ -5035,7 +5035,7 @@ find_reloads_address (machine_mode mode, rtx *memrefloc, rtx ad,
 	 taken care of above.  */
 
       if (ind_levels == 0
-	  || (GET_CODE (XEXP (tem, 0)) == SYMBOL_REF && ! indirect_symref_ok)
+	  || (SYMBOL_REF_P (XEXP (tem, 0)) && ! indirect_symref_ok)
 	  || MEM_P (XEXP (tem, 0))
 	  || ! (REG_P (XEXP (tem, 0))
 		|| (GET_CODE (XEXP (tem, 0)) == PLUS
@@ -5229,7 +5229,7 @@ find_reloads_address (machine_mode mode, rtx *memrefloc, rtx ad,
 
       /* If AD is an address in the constant pool, the MEM rtx may be shared.
 	 Unshare it so we can safely alter it.  */
-      if (memrefloc && GET_CODE (ad) == SYMBOL_REF
+      if (memrefloc && SYMBOL_REF_P (ad)
 	  && CONSTANT_POOL_ADDRESS_P (ad))
 	{
 	  *memrefloc = copy_rtx (*memrefloc);
@@ -5505,7 +5505,7 @@ find_reloads_address_1 (machine_mode mode, addr_space_t as,
 	rtx op0 = orig_op0;
 	rtx op1 = orig_op1;
 
-	if (GET_CODE (op0) == SUBREG)
+	if (SUBREG_P (op0))
 	  {
 	    op0 = SUBREG_REG (op0);
 	    code0 = GET_CODE (op0);
@@ -5518,7 +5518,7 @@ find_reloads_address_1 (machine_mode mode, addr_space_t as,
 						       GET_MODE (orig_op0))));
 	  }
 
-	if (GET_CODE (op1) == SUBREG)
+	if (SUBREG_P (op1))
 	  {
 	    op1 = SUBREG_REG (op1);
 	    code1 = GET_CODE (op1);
@@ -6266,7 +6266,7 @@ subst_reloads (rtx_insn *insn)
 	  /* If we're replacing a LABEL_REF with a register, there must
 	     already be an indication (to e.g. flow) which label this
 	     register refers to.  */
-	  gcc_assert (GET_CODE (*r->where) != LABEL_REF
+	  gcc_assert (!LABEL_REF_P (*r->where)
 		      || !JUMP_P (insn)
 		      || find_reg_note (insn,
 					REG_LABEL_OPERAND,
@@ -6362,7 +6362,7 @@ find_replacement (rtx *loc)
 
 	  return reloadreg;
 	}
-      else if (reloadreg && GET_CODE (*loc) == SUBREG
+      else if (reloadreg && SUBREG_P (*loc)
 	       && r->where == &SUBREG_REG (*loc))
 	{
 	  if (r->mode != VOIDmode && GET_MODE (reloadreg) != r->mode)
@@ -6455,7 +6455,7 @@ refers_to_regno_for_reload_p (unsigned int regno, unsigned int endregno,
 	  /* Note setting a SUBREG counts as referring to the REG it is in for
 	     a pseudo but not for hard registers since we can
 	     treat each word individually.  */
-	  && ((GET_CODE (SET_DEST (x)) == SUBREG
+	  && ((SUBREG_P (SET_DEST (x))
 	       && loc != &SUBREG_REG (SET_DEST (x))
 	       && REG_P (SUBREG_REG (SET_DEST (x)))
 	       && REGNO (SUBREG_REG (SET_DEST (x))) >= FIRST_PSEUDO_REGISTER
@@ -6531,9 +6531,9 @@ reg_overlap_mentioned_for_reload_p (rtx x, rtx in)
   /* If either argument is a constant, then modifying X cannot affect IN.  */
   if (CONSTANT_P (x) || CONSTANT_P (in))
     return 0;
-  else if (GET_CODE (x) == SUBREG && MEM_P (SUBREG_REG (x)))
+  else if (SUBREG_P (x) && MEM_P (SUBREG_REG (x)))
     return refers_to_mem_for_reload_p (in);
-  else if (GET_CODE (x) == SUBREG)
+  else if (SUBREG_P (x))
     {
       regno = REGNO (SUBREG_REG (x));
       if (regno < FIRST_PSEUDO_REGISTER)
@@ -6942,7 +6942,7 @@ find_equiv_reg (rtx goal, rtx_insn *insn, enum reg_class rclass, int other,
 	  if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
 	    {
 	      rtx dest = SET_DEST (pat);
-	      while (GET_CODE (dest) == SUBREG
+	      while (SUBREG_P (dest)
 		     || GET_CODE (dest) == ZERO_EXTRACT
 		     || GET_CODE (dest) == STRICT_LOW_PART)
 		dest = XEXP (dest, 0);
@@ -6981,7 +6981,7 @@ find_equiv_reg (rtx goal, rtx_insn *insn, enum reg_class rclass, int other,
 		  if (GET_CODE (v1) == SET || GET_CODE (v1) == CLOBBER)
 		    {
 		      rtx dest = SET_DEST (v1);
-		      while (GET_CODE (dest) == SUBREG
+		      while (SUBREG_P (dest)
 			     || GET_CODE (dest) == ZERO_EXTRACT
 			     || GET_CODE (dest) == STRICT_LOW_PART)
 			dest = XEXP (dest, 0);
diff --git a/gcc/reload1.c b/gcc/reload1.c
index 38ee356a791..d30badc0c4f 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -2436,13 +2436,13 @@ set_label_offsets (rtx x, rtx_insn *insn, int initial_p)
 
 	case IF_THEN_ELSE:
 	  tem = XEXP (SET_SRC (x), 1);
-	  if (GET_CODE (tem) == LABEL_REF)
+	  if (LABEL_REF_P (tem))
 	    set_label_offsets (label_ref_label (tem), insn, initial_p);
 	  else if (GET_CODE (tem) != PC && GET_CODE (tem) != RETURN)
 	    break;
 
 	  tem = XEXP (SET_SRC (x), 2);
-	  if (GET_CODE (tem) == LABEL_REF)
+	  if (LABEL_REF_P (tem))
 	    set_label_offsets (label_ref_label (tem), insn, initial_p);
 	  else if (GET_CODE (tem) != PC && GET_CODE (tem) != RETURN)
 	    break;
@@ -2850,7 +2850,7 @@ eliminate_regs_1 (rtx x, machine_mode mem_mode, rtx insn,
 		  || known_eq (x_size, new_size))
 	      )
 	    return adjust_address_nv (new_rtx, GET_MODE (x), SUBREG_BYTE (x));
-	  else if (insn && GET_CODE (insn) == DEBUG_INSN)
+	  else if (insn && DEBUG_INSN_P (insn))
 	    return gen_rtx_raw_SUBREG (GET_MODE (x), new_rtx, SUBREG_BYTE (x));
 	  else
 	    return gen_rtx_SUBREG (GET_MODE (x), new_rtx, SUBREG_BYTE (x));
@@ -3270,7 +3270,7 @@ eliminate_regs_in_insn (rtx_insn *insn, int replace)
       if (plus_cst_src)
 	{
 	  rtx reg = XEXP (plus_cst_src, 0);
-	  if (GET_CODE (reg) == SUBREG && subreg_lowpart_p (reg))
+	  if (SUBREG_P (reg) && subreg_lowpart_p (reg))
 	    reg = SUBREG_REG (reg);
 
 	  if (!REG_P (reg) || REGNO (reg) >= FIRST_PSEUDO_REGISTER)
@@ -3282,7 +3282,7 @@ eliminate_regs_in_insn (rtx_insn *insn, int replace)
       rtx reg = XEXP (plus_cst_src, 0);
       poly_int64 offset = INTVAL (XEXP (plus_cst_src, 1));
 
-      if (GET_CODE (reg) == SUBREG)
+      if (SUBREG_P (reg))
 	reg = SUBREG_REG (reg);
 
       for (ep = reg_eliminate; ep < &reg_eliminate[NUM_ELIMINABLE_REGS]; ep++)
@@ -3292,7 +3292,7 @@ eliminate_regs_in_insn (rtx_insn *insn, int replace)
 	    offset += ep->offset;
 	    offset = trunc_int_for_mode (offset, GET_MODE (plus_cst_src));
 
-	    if (GET_CODE (XEXP (plus_cst_src, 0)) == SUBREG)
+	    if (SUBREG_P (XEXP (plus_cst_src, 0)))
 	      to_rtx = gen_lowpart (GET_MODE (XEXP (plus_cst_src, 0)),
 				    to_rtx);
 	    /* If we have a nonzero offset, and the source is already
@@ -3721,7 +3721,7 @@ mark_not_eliminable (rtx dest, const_rtx x, void *data ATTRIBUTE_UNUSED)
   /* A SUBREG of a hard register here is just changing its mode.  We should
      not see a SUBREG of an eliminable hard register, but check just in
      case.  */
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     dest = SUBREG_REG (dest);
 
   if (dest == hard_frame_pointer_rtx)
@@ -4422,7 +4422,7 @@ strip_paradoxical_subreg (rtx *op_ptr, rtx *other_ptr)
      rather than simplifying it to another hard register, then the
      mode change cannot be properly represented.  For example, OTHER
      might be valid in its current mode, but not in the new one.  */
-  if (GET_CODE (tem) == SUBREG
+  if (SUBREG_P (tem)
       && REG_P (other)
       && HARD_REGISTER_P (other))
     return false;
@@ -4829,7 +4829,7 @@ forget_old_reloads_1 (rtx x, const_rtx setter,
 
   /* note_stores does give us subregs of hard regs,
      subreg_regno_offset requires a hard reg.  */
-  while (GET_CODE (x) == SUBREG)
+  while (SUBREG_P (x))
     {
       /* We ignore the subreg offset when calculating the regno,
 	 because we are using the entire underlying hard register
@@ -5589,10 +5589,10 @@ gen_reload_chain_without_interm_reg_p (int r1, int r2)
 
   if (GET_CODE (in) == PLUS
       && (REG_P (XEXP (in, 0))
-	  || GET_CODE (XEXP (in, 0)) == SUBREG
+	  || SUBREG_P (XEXP (in, 0))
 	  || MEM_P (XEXP (in, 0)))
       && (REG_P (XEXP (in, 1))
-	  || GET_CODE (XEXP (in, 1)) == SUBREG
+	  || SUBREG_P (XEXP (in, 1))
 	  || CONSTANT_P (XEXP (in, 1))
 	  || MEM_P (XEXP (in, 1))))
     {
@@ -5994,7 +5994,7 @@ function_invariant_p (const_rtx x)
     return 1;
   if (GET_CODE (x) == PLUS
       && (XEXP (x, 0) == frame_pointer_rtx || XEXP (x, 0) == arg_pointer_rtx)
-      && GET_CODE (XEXP (x, 1)) == CONST_INT)
+      && CONST_INT_P (XEXP (x, 1)))
     return 1;
   return 0;
 }
@@ -6281,7 +6281,7 @@ choose_reload_regs_init (class insn_chain *chain, rtx *save_reload_reg_rtx)
 static rtx
 replaced_subreg (rtx x)
 {
-  if (GET_CODE (x) == SUBREG)
+  if (SUBREG_P (x))
     return find_replacement (&SUBREG_REG (x));
   return x;
 }
@@ -6467,7 +6467,7 @@ choose_reload_regs (class insn_chain *chain)
 		  regno = REGNO (rld[r].in_reg);
 		  mode = GET_MODE (rld[r].in_reg);
 		}
-	      else if (GET_CODE (rld[r].in_reg) == SUBREG
+	      else if (SUBREG_P (rld[r].in_reg)
 		       && REG_P (SUBREG_REG (rld[r].in_reg)))
 		{
 		  regno = REGNO (SUBREG_REG (rld[r].in_reg));
@@ -6493,7 +6493,7 @@ choose_reload_regs (class insn_chain *chain)
 	      /* This won't work, since REGNO can be a pseudo reg number.
 		 Also, it takes much more hair to keep track of all the things
 		 that can invalidate an inherited reload of part of a pseudoreg.  */
-	      else if (GET_CODE (rld[r].in) == SUBREG
+	      else if (SUBREG_P (rld[r].in)
 		       && REG_P (SUBREG_REG (rld[r].in)))
 		regno = subreg_regno (rld[r].in);
 #endif
@@ -6663,7 +6663,7 @@ choose_reload_regs (class insn_chain *chain)
 			 Make a new REG since this might be used in an
 			 address and not all machines support SUBREGs
 			 there.  */
-		      gcc_assert (GET_CODE (equiv) == SUBREG);
+		      gcc_assert (SUBREG_P (equiv));
 		      regno = subreg_regno (equiv);
 		      equiv = gen_rtx_REG (rld[r].mode, regno);
 		      /* If we choose EQUIV as the reload register, but the
@@ -6914,7 +6914,7 @@ choose_reload_regs (class insn_chain *chain)
 	    check_reg = rld[r].reg_rtx;
 	  else if (reload_override_in[r]
 		   && (REG_P (reload_override_in[r])
-		       || GET_CODE (reload_override_in[r]) == SUBREG))
+		       || SUBREG_P (reload_override_in[r])))
 	    check_reg = reload_override_in[r];
 	  else
 	    continue;
@@ -7131,7 +7131,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
      determine whether a secondary reload is needed.  */
   if (reload_override_in[j]
       && (REG_P (rl->in_reg)
-	  || (GET_CODE (rl->in_reg) == SUBREG
+	  || (SUBREG_P (rl->in_reg)
 	      && REG_P (SUBREG_REG (rl->in_reg)))))
     {
       oldequiv = old;
@@ -7141,7 +7141,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
     oldequiv = old;
   else if (REG_P (oldequiv))
     oldequiv_reg = oldequiv;
-  else if (GET_CODE (oldequiv) == SUBREG)
+  else if (SUBREG_P (oldequiv))
     oldequiv_reg = SUBREG_REG (oldequiv);
 
   reloadreg = reload_reg_rtx_for_input[j];
@@ -7163,7 +7163,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
   /* Encapsulate OLDEQUIV into the reload mode, then load RELOADREG from
      OLDEQUIV.  */
 
-  while (GET_CODE (oldequiv) == SUBREG && GET_MODE (oldequiv) != mode)
+  while (SUBREG_P (oldequiv) && GET_MODE (oldequiv) != mode)
     oldequiv = SUBREG_REG (oldequiv);
   if (GET_MODE (oldequiv) != VOIDmode
       && mode != GET_MODE (oldequiv))
@@ -7342,7 +7342,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
 	 not in the right mode.  */
 
       tmp = oldequiv;
-      if (GET_CODE (tmp) == SUBREG)
+      if (SUBREG_P (tmp))
 	tmp = SUBREG_REG (tmp);
       if (REG_P (tmp)
 	  && REGNO (tmp) >= FIRST_PSEUDO_REGISTER
@@ -7351,14 +7351,14 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
 	{
 	  if (! reg_equiv_mem (REGNO (tmp))
 	      || num_not_at_initial_offset
-	      || GET_CODE (oldequiv) == SUBREG)
+	      || SUBREG_P (oldequiv))
 	    real_oldequiv = rl->in;
 	  else
 	    real_oldequiv = reg_equiv_mem (REGNO (tmp));
 	}
 
       tmp = old;
-      if (GET_CODE (tmp) == SUBREG)
+      if (SUBREG_P (tmp))
 	tmp = SUBREG_REG (tmp);
       if (REG_P (tmp)
 	  && REGNO (tmp) >= FIRST_PSEUDO_REGISTER
@@ -7367,7 +7367,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
 	{
 	  if (! reg_equiv_mem (REGNO (tmp))
 	      || num_not_at_initial_offset
-	      || GET_CODE (old) == SUBREG)
+	      || SUBREG_P (old))
 	    real_old = rl->in;
 	  else
 	    real_old = reg_equiv_mem (REGNO (tmp));
@@ -7542,7 +7542,7 @@ emit_input_reload_insns (class insn_chain *chain, struct reload *rl,
 	   && REGNO (oldequiv) >= FIRST_PSEUDO_REGISTER
 	   && (reg_equiv_memory_loc (REGNO (oldequiv)) != 0
 	       || reg_equiv_constant (REGNO (oldequiv)) != 0))
-	  || (GET_CODE (oldequiv) == SUBREG
+	  || (SUBREG_P (oldequiv)
 	      && REG_P (SUBREG_REG (oldequiv))
 	      && (REGNO (SUBREG_REG (oldequiv))
 		  >= FIRST_PSEUDO_REGISTER)
@@ -7949,7 +7949,7 @@ do_output_reload (class insn_chain *chain, struct reload *rl, int j)
       return;
     }
   /* Likewise for a SUBREG of an operand that dies.  */
-  else if (GET_CODE (old) == SUBREG
+  else if (SUBREG_P (old)
 	   && REG_P (SUBREG_REG (old))
 	   && (note = find_reg_note (insn, REG_UNUSED,
 				     SUBREG_REG (old))) != 0)
@@ -8098,7 +8098,7 @@ emit_reload_insns (class insn_chain *chain)
 	{
 	  rtx reg = rld[r].in_reg;
 
-	  if (GET_CODE (reg) == SUBREG)
+	  if (SUBREG_P (reg))
 	    reg = SUBREG_REG (reg);
 
 	  if (REG_P (reg)
@@ -8507,10 +8507,10 @@ gen_reload (rtx out, rtx in, int opnum, enum reload_type type)
 
   if (GET_CODE (in) == PLUS
       && (REG_P (XEXP (in, 0))
-	  || GET_CODE (XEXP (in, 0)) == SUBREG
+	  || SUBREG_P (XEXP (in, 0))
 	  || MEM_P (XEXP (in, 0)))
       && (REG_P (XEXP (in, 1))
-	  || GET_CODE (XEXP (in, 1)) == SUBREG
+	  || SUBREG_P (XEXP (in, 1))
 	  || CONSTANT_P (XEXP (in, 1))
 	  || MEM_P (XEXP (in, 1))))
     {
@@ -8569,7 +8569,7 @@ gen_reload (rtx out, rtx in, int opnum, enum reload_type type)
 
       code = optab_handler (add_optab, GET_MODE (out));
 
-      if (CONSTANT_P (op1) || MEM_P (op1) || GET_CODE (op1) == SUBREG
+      if (CONSTANT_P (op1) || MEM_P (op1) || SUBREG_P (op1)
 	  || (REG_P (op1)
 	      && REGNO (op1) >= FIRST_PSEUDO_REGISTER)
 	  || (code != CODE_FOR_nothing
@@ -8661,7 +8661,7 @@ gen_reload (rtx out, rtx in, int opnum, enum reload_type type)
       fatal_insn ("failure trying to reload:", set);
     }
   /* If IN is a simple operand, use gen_move_insn.  */
-  else if (OBJECT_P (in) || GET_CODE (in) == SUBREG)
+  else if (OBJECT_P (in) || SUBREG_P (in))
     {
       tem = emit_insn (gen_move_insn (out, in));
       /* IN may contain a LABEL_REF, if so add a REG_LABEL_OPERAND note.  */
@@ -8714,7 +8714,7 @@ delete_output_reload (rtx_insn *insn, int j, int last_reload_reg,
 
   /* Get the raw pseudo-register referred to.  */
 
-  while (GET_CODE (reg) == SUBREG)
+  while (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
   substed = reg_equiv_memory_loc (REGNO (reg));
 
@@ -8731,7 +8731,7 @@ delete_output_reload (rtx_insn *insn, int j, int last_reload_reg,
       if (AUTO_INC_DEC && rld[k].out && ! rld[k].out_reg)
 	reg2 = XEXP (rld[k].in_reg, 0);
 
-      while (GET_CODE (reg2) == SUBREG)
+      while (SUBREG_P (reg2))
 	reg2 = SUBREG_REG (reg2);
       if (rtx_equal_p (reg2, reg))
 	{
diff --git a/gcc/reorg.c b/gcc/reorg.c
index bdfcf8851cd..4ec3d7efea6 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -894,19 +894,19 @@ get_branch_condition (const rtx_insn *insn, rtx target)
     return 0;
 
   src = SET_SRC (pat);
-  if (GET_CODE (src) == LABEL_REF && label_ref_label (src) == target)
+  if (LABEL_REF_P (src) && label_ref_label (src) == target)
     return const_true_rtx;
 
   else if (GET_CODE (src) == IF_THEN_ELSE
 	   && XEXP (src, 2) == pc_rtx
-	   && ((GET_CODE (XEXP (src, 1)) == LABEL_REF
+	   && ((LABEL_REF_P (XEXP (src, 1))
 		&& label_ref_label (XEXP (src, 1)) == target)
 	       || (ANY_RETURN_P (XEXP (src, 1)) && XEXP (src, 1) == target)))
     return XEXP (src, 0);
 
   else if (GET_CODE (src) == IF_THEN_ELSE
 	   && XEXP (src, 1) == pc_rtx
-	   && ((GET_CODE (XEXP (src, 2)) == LABEL_REF
+	   && ((LABEL_REF_P (XEXP (src, 2))
 		&& label_ref_label (XEXP (src, 2)) == target)
 	       || (ANY_RETURN_P (XEXP (src, 2)) && XEXP (src, 2) == target)))
     {
@@ -1510,7 +1510,7 @@ redundant_insn (rtx insn, rtx_insn *target, const vec<rtx_insn *> &delay_list)
 	  || GET_CODE (pat) == CLOBBER_HIGH)
 	continue;
 
-      if (GET_CODE (trial) == DEBUG_INSN)
+      if (DEBUG_INSN_P (trial))
 	continue;
 
       if (rtx_sequence *seq = dyn_cast <rtx_sequence *> (pat))
@@ -1609,7 +1609,7 @@ redundant_insn (rtx insn, rtx_insn *target, const vec<rtx_insn *> &delay_list)
 	  || GET_CODE (pat) == CLOBBER_HIGH)
 	continue;
 
-      if (GET_CODE (trial) == DEBUG_INSN)
+      if (DEBUG_INSN_P (trial))
 	continue;
 
       if (rtx_sequence *seq = dyn_cast <rtx_sequence *> (pat))
@@ -2047,7 +2047,7 @@ fill_simple_delay_slots (int non_jumps_p)
 		continue;
 
 	      /* And DEBUG_INSNs never go into delay slots.  */
-	      if (GET_CODE (trial) == DEBUG_INSN)
+	      if (DEBUG_INSN_P (trial))
 		continue;
 
 	      /* Check for resource conflict first, to avoid unnecessary
@@ -2174,7 +2174,7 @@ fill_simple_delay_slots (int non_jumps_p)
 		continue;
 
 	      /* And DEBUG_INSNs do not go in delay slots.  */
-	      if (GET_CODE (trial) == DEBUG_INSN)
+	      if (DEBUG_INSN_P (trial))
 		continue;
 
 	      /* If this already has filled delay slots, get the insn needing
@@ -2442,7 +2442,7 @@ fill_slots_from_thread (rtx_jump_insn *insn, rtx condition,
 	  || GET_CODE (pat) == CLOBBER_HIGH)
 	continue;
 
-      if (GET_CODE (trial) == DEBUG_INSN)
+      if (DEBUG_INSN_P (trial))
 	continue;
 
       /* If TRIAL conflicts with the insns ahead of it, we lose.  Also,
diff --git a/gcc/resource.c b/gcc/resource.c
index c4bcfd7dc71..3c3d6a1c6e4 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -90,10 +90,10 @@ update_live_status (rtx dest, const_rtx x, void *data ATTRIBUTE_UNUSED)
   int i;
 
   if (!REG_P (dest)
-      && (GET_CODE (dest) != SUBREG || !REG_P (SUBREG_REG (dest))))
+      && (!SUBREG_P (dest) || !REG_P (SUBREG_REG (dest))))
     return;
 
-  if (GET_CODE (dest) == SUBREG)
+  if (SUBREG_P (dest))
     {
       first_regno = subreg_regno (dest);
       last_regno = first_regno + subreg_nregs (dest);
@@ -291,7 +291,7 @@ mark_referenced_resources (rtx x, struct resources *res,
       if (GET_CODE (x) == ZERO_EXTRACT
 	  || GET_CODE (x) == STRICT_LOW_PART)
 	mark_referenced_resources (x, res, false);
-      else if (GET_CODE (x) == SUBREG)
+      else if (SUBREG_P (x))
 	x = SUBREG_REG (x);
       if (MEM_P (x))
 	mark_referenced_resources (XEXP (x, 0), res, false);
diff --git a/gcc/rtl.c b/gcc/rtl.c
index d7b8e9877c3..3a678e46236 100644
--- a/gcc/rtl.c
+++ b/gcc/rtl.c
@@ -205,7 +205,7 @@ rtx_size (const_rtx x)
     return (RTX_HDR_SIZE
 	    + sizeof (struct const_poly_int_def)
 	    + CONST_POLY_INT_COEFFS (x).extra_size ());
-  if (GET_CODE (x) == SYMBOL_REF && SYMBOL_REF_HAS_BLOCK_INFO_P (x))
+  if (SYMBOL_REF_P (x) && SYMBOL_REF_HAS_BLOCK_INFO_P (x))
     return RTX_HDR_SIZE + sizeof (struct block_symbol);
   return RTX_CODE_SIZE (GET_CODE (x));
 }
@@ -272,7 +272,7 @@ shared_const_p (const_rtx orig)
      a LABEL_REF, it isn't sharable.  */
   poly_int64 offset;
   return (GET_CODE (XEXP (orig, 0)) == PLUS
-	  && GET_CODE (XEXP (XEXP (orig, 0), 0)) == SYMBOL_REF
+	  && SYMBOL_REF_P (XEXP (XEXP (orig, 0), 0))
 	  && poly_int_rtx_p (XEXP (XEXP (orig, 0), 1), &offset));
 }
 
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 268a38799d6..3ce8c1bb03e 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -871,7 +871,7 @@ offset_within_block_p (const_rtx symbol, HOST_WIDE_INT offset)
 {
   tree decl;
 
-  if (GET_CODE (symbol) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (symbol))
     return false;
 
   if (offset == 0)
@@ -1027,7 +1027,7 @@ unsigned_reg_p (rtx op)
       && TYPE_UNSIGNED (TREE_TYPE (REG_EXPR (op))))
     return true;
 
-  if (GET_CODE (op) == SUBREG
+  if (SUBREG_P (op)
       && SUBREG_PROMOTED_SIGN (op))
     return true;
 
@@ -1052,7 +1052,7 @@ reg_mentioned_p (const_rtx reg, const_rtx in)
   if (reg == in)
     return 1;
 
-  if (GET_CODE (in) == LABEL_REF)
+  if (LABEL_REF_P (in))
     return reg == label_ref_label (in);
 
   code = GET_CODE (in);
@@ -1155,7 +1155,7 @@ reg_referenced_p (const_rtx x, const_rtx body)
       if (GET_CODE (SET_DEST (body)) != CC0
 	  && GET_CODE (SET_DEST (body)) != PC
 	  && !REG_P (SET_DEST (body))
-	  && ! (GET_CODE (SET_DEST (body)) == SUBREG
+	  && ! (SUBREG_P (SET_DEST (body))
 		&& REG_P (SUBREG_REG (SET_DEST (body)))
 		&& !read_modify_subreg_p (SET_DEST (body)))
 	  && reg_overlap_mentioned_p (x, SET_DEST (body)))
@@ -1401,7 +1401,7 @@ modified_in_p (const_rtx x, const_rtx insn)
 bool
 read_modify_subreg_p (const_rtx x)
 {
-  if (GET_CODE (x) != SUBREG)
+  if (!SUBREG_P (x))
     return false;
   poly_uint64 isize = GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)));
   poly_uint64 osize = GET_MODE_SIZE (GET_MODE (x));
@@ -1606,7 +1606,7 @@ set_noop_p (const_rtx set)
   if (GET_CODE (dst) == STRICT_LOW_PART)
     dst = XEXP (dst, 0);
 
-  if (GET_CODE (src) == SUBREG && GET_CODE (dst) == SUBREG)
+  if (SUBREG_P (src) && SUBREG_P (dst))
     {
       if (maybe_ne (SUBREG_BYTE (src), SUBREG_BYTE (dst)))
 	return 0;
@@ -1748,7 +1748,7 @@ refers_to_regno_p (unsigned int regno, unsigned int endregno, const_rtx x,
 	  /* Note setting a SUBREG counts as referring to the REG it is in for
 	     a pseudo but not for hard registers since we can
 	     treat each word individually.  */
-	  && ((GET_CODE (SET_DEST (x)) == SUBREG
+	  && ((SUBREG_P (SET_DEST (x))
 	       && loc != &SUBREG_REG (SET_DEST (x))
 	       && REG_P (SUBREG_REG (SET_DEST (x)))
 	       && REGNO (SUBREG_REG (SET_DEST (x))) >= FIRST_PSEUDO_REGISTER
@@ -1912,7 +1912,7 @@ note_stores (const_rtx x, void (*fun) (rtx, const_rtx, void *), void *data)
     {
       rtx dest = SET_DEST (x);
 
-      while ((GET_CODE (dest) == SUBREG
+      while ((SUBREG_P (dest)
 	      && (!REG_P (SUBREG_REG (dest))
 		  || REGNO (SUBREG_REG (dest)) >= FIRST_PSEUDO_REGISTER))
 	     || GET_CODE (dest) == ZERO_EXTRACT
@@ -2010,7 +2010,7 @@ note_uses (rtx *pbody, void (*fun) (rtx *, void *), void *data)
 	    (*fun) (&XEXP (dest, 2), data);
 	  }
 
-	while (GET_CODE (dest) == SUBREG || GET_CODE (dest) == STRICT_LOW_PART)
+	while (SUBREG_P (dest) || GET_CODE (dest) == STRICT_LOW_PART)
 	  dest = XEXP (dest, 0);
 
 	if (MEM_P (dest))
@@ -2072,7 +2072,7 @@ covers_regno_no_parallel_p (const_rtx dest, unsigned int test_regno)
 {
   unsigned int regno, endregno;
 
-  if (GET_CODE (dest) == SUBREG && !read_modify_subreg_p (dest))
+  if (SUBREG_P (dest) && !read_modify_subreg_p (dest))
     dest = SUBREG_REG (dest);
 
   if (!REG_P (dest))
@@ -2850,7 +2850,7 @@ may_trap_p_1 (const_rtx x, unsigned flags)
 	return flag_trapping_math;
       if (!CONSTANT_P (XEXP (x, 1)) || (XEXP (x, 1) == const0_rtx))
 	return 1;
-      if (GET_CODE (XEXP (x, 1)) == CONST_VECTOR)
+      if (CONST_VECTOR_P (XEXP (x, 1)))
 	{
 	  /* For CONST_VECTOR, return 1 if any element is or might be zero.  */
 	  unsigned int n_elts;
@@ -3090,7 +3090,7 @@ replace_rtx (rtx x, rtx from, rtx to, bool all_regs)
       gcc_assert (GET_MODE (x) == GET_MODE (from));
       return to;
     }
-  else if (GET_CODE (x) == SUBREG)
+  else if (SUBREG_P (x))
     {
       rtx new_rtx = replace_rtx (SUBREG_REG (x), from, to, all_regs);
 
@@ -3177,7 +3177,7 @@ replace_label (rtx *loc, rtx old_label, rtx new_label, bool update_label_nuses)
       rtx *loc = *iter;
       if (rtx x = *loc)
 	{
-	  if (GET_CODE (x) == SYMBOL_REF
+	  if (SYMBOL_REF_P (x)
 	      && CONSTANT_POOL_ADDRESS_P (x))
 	    {
 	      rtx c = get_pool_constant (x);
@@ -3196,7 +3196,7 @@ replace_label (rtx *loc, rtx old_label, rtx new_label, bool update_label_nuses)
 		}
 	    }
 
-	  if ((GET_CODE (x) == LABEL_REF
+	  if ((LABEL_REF_P (x)
 	       || GET_CODE (x) == INSN_LIST)
 	      && XEXP (x, 0) == old_label)
 	    {
@@ -3230,7 +3230,7 @@ rtx_referenced_p (const_rtx x, const_rtx body)
     if (const_rtx y = *iter)
       {
 	/* Check if a label_ref Y refers to label X.  */
-	if (GET_CODE (y) == LABEL_REF
+	if (LABEL_REF_P (y)
 	    && LABEL_P (x)
 	    && label_ref_label (y) == x)
 	  return true;
@@ -3239,7 +3239,7 @@ rtx_referenced_p (const_rtx x, const_rtx body)
 	  return true;
 
 	/* If Y is a reference to pool constant traverse the constant.  */
-	if (GET_CODE (y) == SYMBOL_REF
+	if (SYMBOL_REF_P (y)
 	    && CONSTANT_POOL_ADDRESS_P (y))
 	  iter.substitute (get_pool_constant (y));
       }
@@ -3296,7 +3296,7 @@ computed_jump_p_1 (const_rtx x)
       return 1;
 
     case MEM:
-      return ! (GET_CODE (XEXP (x, 0)) == SYMBOL_REF
+      return ! (SYMBOL_REF_P (XEXP (x, 0))
 		&& CONSTANT_POOL_ADDRESS_P (XEXP (x, 0)));
 
     case IF_THEN_ELSE:
@@ -5804,7 +5804,7 @@ get_condition (rtx_insn *jump, rtx_insn **earliest, int allow_cc_mode,
   /* If this branches to JUMP_LABEL when the condition is false, reverse
      the condition.  */
   reverse
-    = GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF
+    = LABEL_REF_P (XEXP (SET_SRC (set), 2))
       && label_ref_label (XEXP (SET_SRC (set), 2)) == JUMP_LABEL (jump);
 
   return canonicalize_condition (jump, cond, reverse, earliest, NULL_RTX,
@@ -6051,7 +6051,7 @@ split_double (rtx value, rtx *first, rtx *second)
 	    }
 	}
     }
-  else if (GET_CODE (value) == CONST_WIDE_INT)
+  else if (CONST_WIDE_INT_P (value))
     {
       /* All of this is scary code and needs to be converted to
 	 properly work with any size integer.  */
@@ -6208,7 +6208,7 @@ get_base_term (rtx *inner)
     inner = strip_address_mutations (&XEXP (*inner, 0));
   if (REG_P (*inner)
       || MEM_P (*inner)
-      || GET_CODE (*inner) == SUBREG
+      || SUBREG_P (*inner)
       || GET_CODE (*inner) == SCRATCH)
     return inner;
   return 0;
@@ -6225,7 +6225,7 @@ get_index_term (rtx *inner)
     inner = strip_address_mutations (&XEXP (*inner, 0));
   if (REG_P (*inner)
       || MEM_P (*inner)
-      || GET_CODE (*inner) == SUBREG
+      || SUBREG_P (*inner)
       || GET_CODE (*inner) == SCRATCH)
     return inner;
   return 0;
@@ -6567,7 +6567,7 @@ contains_symbolic_reference_p (const_rtx x)
 {
   subrtx_iterator::array_type array;
   FOR_EACH_SUBRTX (iter, array, x, ALL)
-    if (SYMBOL_REF_P (*iter) || GET_CODE (*iter) == LABEL_REF)
+    if (SYMBOL_REF_P (*iter) || LABEL_REF_P (*iter))
       return true;
 
   return false;
@@ -6597,7 +6597,7 @@ tls_referenced_p (const_rtx x)
 
   subrtx_iterator::array_type array;
   FOR_EACH_SUBRTX (iter, array, x, ALL)
-    if (GET_CODE (*iter) == SYMBOL_REF && SYMBOL_REF_TLS_MODEL (*iter) != 0)
+    if (SYMBOL_REF_P (*iter) && SYMBOL_REF_TLS_MODEL (*iter) != 0)
       return true;
   return false;
 }
diff --git a/gcc/rtlhooks.c b/gcc/rtlhooks.c
index 0ce3d1ec637..d410cd55094 100644
--- a/gcc/rtlhooks.c
+++ b/gcc/rtlhooks.c
@@ -51,7 +51,7 @@ gen_lowpart_general (machine_mode mode, rtx x)
     return result;
   /* Handle SUBREGs and hard REGs that were rejected by
      simplify_gen_subreg.  */
-  else if (REG_P (x) || GET_CODE (x) == SUBREG)
+  else if (REG_P (x) || SUBREG_P (x))
     {
       result = gen_lowpart_common (mode, copy_to_reg (x));
       gcc_assert (result != 0);
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index 5cb4a462ce9..2d5073dd79a 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -2089,7 +2089,7 @@ mark_insn_reg_birth (rtx insn, rtx reg, bool clobber_p, bool unused_p)
 {
   int regno;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (! REG_P (reg))
@@ -2148,7 +2148,7 @@ mark_reg_death (rtx reg)
 {
   int regno;
 
-  if (GET_CODE (reg) == SUBREG)
+  if (SUBREG_P (reg))
     reg = SUBREG_REG (reg);
 
   if (! REG_P (reg))
@@ -2423,7 +2423,7 @@ sched_analyze_1 (class deps_desc *deps, rtx x, rtx_insn *insn)
       return;
     }
 
-  while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
+  while (GET_CODE (dest) == STRICT_LOW_PART || SUBREG_P (dest)
 	 || GET_CODE (dest) == ZERO_EXTRACT)
     {
       if (GET_CODE (dest) == STRICT_LOW_PART
@@ -3457,7 +3457,7 @@ sched_analyze_insn (class deps_desc *deps, rtx x, rtx_insn *insn)
 	}
 
       tmp = SET_DEST (set);
-      if (GET_CODE (tmp) == SUBREG)
+      if (SUBREG_P (tmp))
 	tmp = SUBREG_REG (tmp);
       if (REG_P (tmp))
 	dest_regno = REGNO (tmp);
@@ -3465,7 +3465,7 @@ sched_analyze_insn (class deps_desc *deps, rtx x, rtx_insn *insn)
 	goto end_call_group;
 
       tmp = SET_SRC (set);
-      if (GET_CODE (tmp) == SUBREG)
+      if (SUBREG_P (tmp))
 	tmp = SUBREG_REG (tmp);
       if ((GET_CODE (tmp) == PLUS
 	   || GET_CODE (tmp) == MINUS)
@@ -3569,7 +3569,7 @@ call_may_noreturn_p (rtx_insn *insn)
     return false;
 
   call = get_call_rtx_from (insn);
-  if (call && GET_CODE (XEXP (XEXP (call, 0), 0)) == SYMBOL_REF)
+  if (call && SYMBOL_REF_P (XEXP (XEXP (call, 0), 0)))
     {
       rtx symbol = XEXP (XEXP (call, 0), 0);
       if (SYMBOL_REF_DECL (symbol)
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 59ee6a0a57c..f30b8d3f94c 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -1672,7 +1672,7 @@ check_live_1 (int src, rtx x)
   if (reg == 0)
     return 1;
 
-  while (GET_CODE (reg) == SUBREG
+  while (SUBREG_P (reg)
 	 || GET_CODE (reg) == ZERO_EXTRACT
 	 || GET_CODE (reg) == STRICT_LOW_PART)
     reg = XEXP (reg, 0);
@@ -1755,7 +1755,7 @@ update_live_1 (int src, rtx x)
   if (reg == 0)
     return;
 
-  while (GET_CODE (reg) == SUBREG
+  while (SUBREG_P (reg)
 	 || GET_CODE (reg) == ZERO_EXTRACT
 	 || GET_CODE (reg) == STRICT_LOW_PART)
     reg = XEXP (reg, 0);
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index f127ff74599..e99c9002471 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -811,7 +811,7 @@ count_occurrences_equiv (const_rtx what, const_rtx where)
 	    return 0;
 	  count += 1;
 	}
-      else if (GET_CODE (x) == SUBREG
+      else if (SUBREG_P (x)
 	       && (!REG_P (SUBREG_REG (x))
 		   || REGNO (SUBREG_REG (x)) == REGNO (what)))
 	/* ??? Do not support substituting regs inside subregs.  In that case,
diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c
index 9359a3cdb4d..2d3ec52bd93 100644
--- a/gcc/simplify-rtx.c
+++ b/gcc/simplify-rtx.c
@@ -247,7 +247,7 @@ avoid_constant_pool_reference (rtx x)
 
   /* If this is a constant pool reference, we can turn it into its
      constant and hope that simplifications happen.  */
-  if (GET_CODE (addr) == SYMBOL_REF
+  if (SYMBOL_REF_P (addr)
       && CONSTANT_POOL_ADDRESS_P (addr))
     {
       c = get_pool_constant (addr);
@@ -826,7 +826,7 @@ simplify_truncation (machine_mode mode, rtx op,
 
   /* (truncate:A (subreg:B (truncate:C X) 0)) is
      (truncate:A X).  */
-  if (GET_CODE (op) == SUBREG
+  if (SUBREG_P (op)
       && is_a <scalar_int_mode> (mode, &int_mode)
       && SCALAR_INT_MODE_P (op_mode)
       && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (op)), &subreg_mode)
@@ -1242,7 +1242,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
 
       /* A truncate of a memory is just loading the low part of the memory
 	 if we are not changing the meaning of the address. */
-      if (GET_CODE (op) == MEM
+      if (MEM_P (op)
 	  && !VECTOR_MODE_P (mode)
 	  && !MEM_VOLATILE_P (op)
 	  && !mode_dependent_address_p (XEXP (op, 0), MEM_ADDR_SPACE (op)))
@@ -1300,7 +1300,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
 
       /* (float_truncate:SF (subreg:DF (float_truncate:SF X) 0))
 	 is (float_truncate:SF x).  */
-      if (GET_CODE (op) == SUBREG
+      if (SUBREG_P (op)
 	  && subreg_lowpart_p (op)
 	  && GET_CODE (SUBREG_REG (op)) == FLOAT_TRUNCATE)
 	return SUBREG_REG (op);
@@ -1423,8 +1423,8 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
       if (GET_CODE (op) == TRUNCATE
 	  && GET_MODE (XEXP (op, 0)) == mode
 	  && GET_CODE (XEXP (op, 0)) == MINUS
-	  && GET_CODE (XEXP (XEXP (op, 0), 0)) == LABEL_REF
-	  && GET_CODE (XEXP (XEXP (op, 0), 1)) == LABEL_REF)
+	  && LABEL_REF_P (XEXP (XEXP (op, 0), 0))
+	  && LABEL_REF_P (XEXP (XEXP (op, 0), 1)))
 	return XEXP (op, 0);
 
       /* Extending a widening multiplication should be canonicalized to
@@ -1474,7 +1474,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
       /* Check for a sign extension of a subreg of a promoted
 	 variable, where the promotion is sign-extended, and the
 	 target mode is the same as the variable's promotion.  */
-      if (GET_CODE (op) == SUBREG
+      if (SUBREG_P (op)
 	  && SUBREG_PROMOTED_VAR_P (op)
 	  && SUBREG_PROMOTED_SIGNED_P (op)
 	  && !paradoxical_subreg_p (mode, GET_MODE (SUBREG_REG (op))))
@@ -1537,7 +1537,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
 	  && ! POINTERS_EXTEND_UNSIGNED
 	  && mode == Pmode && GET_MODE (op) == ptr_mode
 	  && (CONSTANT_P (op)
-	      || (GET_CODE (op) == SUBREG
+	      || (SUBREG_P (op)
 		  && REG_P (SUBREG_REG (op))
 		  && REG_POINTER (SUBREG_REG (op))
 		  && GET_MODE (SUBREG_REG (op)) == Pmode))
@@ -1557,7 +1557,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
       /* Check for a zero extension of a subreg of a promoted
 	 variable, where the promotion is zero-extended, and the
 	 target mode is the same as the variable's promotion.  */
-      if (GET_CODE (op) == SUBREG
+      if (SUBREG_P (op)
 	  && SUBREG_PROMOTED_VAR_P (op)
 	  && SUBREG_PROMOTED_UNSIGNED_P (op)
 	  && !paradoxical_subreg_p (mode, GET_MODE (SUBREG_REG (op))))
@@ -1667,7 +1667,7 @@ simplify_unary_operation_1 (enum rtx_code code, machine_mode mode, rtx op)
 	  && POINTERS_EXTEND_UNSIGNED > 0
 	  && mode == Pmode && GET_MODE (op) == ptr_mode
 	  && (CONSTANT_P (op)
-	      || (GET_CODE (op) == SUBREG
+	      || (SUBREG_P (op)
 		  && REG_P (SUBREG_REG (op))
 		  && REG_POINTER (SUBREG_REG (op))
 		  && GET_MODE (SUBREG_REG (op)) == Pmode))
@@ -1736,7 +1736,7 @@ simplify_const_unary_operation (enum rtx_code code, machine_mode mode,
       }
       if (CONST_SCALAR_INT_P (op) || CONST_DOUBLE_AS_FLOAT_P (op))
 	return gen_const_vec_duplicate (mode, op);
-      if (GET_CODE (op) == CONST_VECTOR
+      if (CONST_VECTOR_P (op)
 	  && (CONST_VECTOR_DUPLICATE_P (op)
 	      || CONST_VECTOR_NUNITS (op).is_constant ()))
 	{
@@ -1752,7 +1752,7 @@ simplify_const_unary_operation (enum rtx_code code, machine_mode mode,
     }
 
   if (VECTOR_MODE_P (mode)
-      && GET_CODE (op) == CONST_VECTOR
+      && CONST_VECTOR_P (op)
       && known_eq (GET_MODE_NUNITS (mode), CONST_VECTOR_NUNITS (op)))
     {
       gcc_assert (GET_MODE (op) == op_mode);
@@ -2262,13 +2262,13 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	 than HOST_BITS_PER_WIDE_INT.  */
 
       if ((GET_CODE (op0) == CONST
-	   || GET_CODE (op0) == SYMBOL_REF
-	   || GET_CODE (op0) == LABEL_REF)
+	   || SYMBOL_REF_P (op0)
+	   || LABEL_REF_P (op0))
 	  && poly_int_rtx_p (op1, &offset))
 	return plus_constant (mode, op0, offset);
       else if ((GET_CODE (op1) == CONST
-		|| GET_CODE (op1) == SYMBOL_REF
-		|| GET_CODE (op1) == LABEL_REF)
+		|| SYMBOL_REF_P (op1)
+		|| LABEL_REF_P (op1))
 	       && poly_int_rtx_p (op0, &offset))
 	return plus_constant (mode, op1, offset);
 
@@ -2541,8 +2541,8 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	}
 
       if ((GET_CODE (op0) == CONST
-	   || GET_CODE (op0) == SYMBOL_REF
-	   || GET_CODE (op0) == LABEL_REF)
+	   || SYMBOL_REF_P (op0)
+	   || LABEL_REF_P (op0))
 	  && poly_int_rtx_p (op1, &offset))
 	return plus_constant (mode, op0, trunc_int_for_mode (-offset, mode));
 
@@ -2791,7 +2791,7 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
          mode size to (rotate A CX).  */
 
       if (GET_CODE (op1) == ASHIFT
-          || GET_CODE (op1) == SUBREG)
+          || SUBREG_P (op1))
         {
 	  opleft = op1;
 	  opright = op0;
@@ -2813,13 +2813,13 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
       /* Same, but for ashift that has been "simplified" to a wider mode
         by simplify_shift_const.  */
 
-      if (GET_CODE (opleft) == SUBREG
+      if (SUBREG_P (opleft)
 	  && is_a <scalar_int_mode> (mode, &int_mode)
 	  && is_a <scalar_int_mode> (GET_MODE (SUBREG_REG (opleft)),
 				     &inner_mode)
           && GET_CODE (SUBREG_REG (opleft)) == ASHIFT
           && GET_CODE (opright) == LSHIFTRT
-          && GET_CODE (XEXP (opright, 0)) == SUBREG
+          && SUBREG_P (XEXP (opright, 0))
 	  && known_eq (SUBREG_BYTE (opleft), SUBREG_BYTE (XEXP (opright, 0)))
 	  && GET_MODE_SIZE (int_mode) < GET_MODE_SIZE (inner_mode)
           && rtx_equal_p (XEXP (SUBREG_REG (opleft), 0),
@@ -3642,7 +3642,7 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	  if (vec_duplicate_p (trueop0, &elt0))
 	    return elt0;
 
-	  if (GET_CODE (trueop0) == CONST_VECTOR)
+	  if (CONST_VECTOR_P (trueop0))
 	    return CONST_VECTOR_ELT (trueop0, INTVAL (XVECEXP
 						      (trueop1, 0, 0)));
 
@@ -3725,7 +3725,7 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	       because they are all the same.  */
 	    return gen_vec_duplicate (mode, elt0);
 
-	  if (GET_CODE (trueop0) == CONST_VECTOR)
+	  if (CONST_VECTOR_P (trueop0))
 	    {
 	      unsigned n_elts = XVECLEN (trueop1, 0);
 	      rtvec v = rtvec_alloc (n_elts);
@@ -3968,10 +3968,10 @@ simplify_binary_operation_1 (enum rtx_code code, machine_mode mode,
 	  gcc_assert (GET_MODE_INNER (mode) == op1_mode);
 
 	unsigned int n_elts, in_n_elts;
-	if ((GET_CODE (trueop0) == CONST_VECTOR
+	if ((CONST_VECTOR_P (trueop0)
 	     || CONST_SCALAR_INT_P (trueop0) 
 	     || CONST_DOUBLE_AS_FLOAT_P (trueop0))
-	    && (GET_CODE (trueop1) == CONST_VECTOR
+	    && (CONST_VECTOR_P (trueop1)
 		|| CONST_SCALAR_INT_P (trueop1) 
 		|| CONST_DOUBLE_AS_FLOAT_P (trueop1))
 	    && GET_MODE_NUNITS (mode).is_constant (&n_elts)
@@ -4084,8 +4084,8 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 {
   if (VECTOR_MODE_P (mode)
       && code != VEC_CONCAT
-      && GET_CODE (op0) == CONST_VECTOR
-      && GET_CODE (op1) == CONST_VECTOR)
+      && CONST_VECTOR_P (op0)
+      && CONST_VECTOR_P (op1))
     {
       bool step_ok_p;
       if (CONST_VECTOR_STEPPED_P (op0)
@@ -4145,8 +4145,8 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
       gcc_assert (n_elts >= 2);
       if (n_elts == 2)
 	{
-	  gcc_assert (GET_CODE (op0) != CONST_VECTOR);
-	  gcc_assert (GET_CODE (op1) != CONST_VECTOR);
+	  gcc_assert (!CONST_VECTOR_P (op0));
+	  gcc_assert (!CONST_VECTOR_P (op1));
 
 	  RTVEC_ELT (v, 0) = op0;
 	  RTVEC_ELT (v, 1) = op1;
@@ -4157,8 +4157,8 @@ simplify_const_binary_operation (enum rtx_code code, machine_mode mode,
 	  unsigned op1_n_elts = GET_MODE_NUNITS (GET_MODE (op1)).to_constant ();
 	  unsigned i;
 
-	  gcc_assert (GET_CODE (op0) == CONST_VECTOR);
-	  gcc_assert (GET_CODE (op1) == CONST_VECTOR);
+	  gcc_assert (CONST_VECTOR_P (op0));
+	  gcc_assert (CONST_VECTOR_P (op1));
 	  gcc_assert (op0_n_elts + op1_n_elts == n_elts);
 
 	  for (i = 0; i < op0_n_elts; ++i)
@@ -5833,7 +5833,7 @@ simplify_ternary_operation (enum rtx_code code, machine_mode mode,
       if (VECTOR_MODE_P (GET_MODE (op1))
 	  && GET_CODE (op0) == NE
 	  && GET_CODE (XEXP (op0, 0)) == NOT
-	  && GET_CODE (XEXP (op0, 1)) == CONST_VECTOR)
+	  && CONST_VECTOR_P (XEXP (op0, 1)))
 	{
 	  rtx cv = XEXP (op0, 1);
 	  int nunits;
@@ -5937,8 +5937,8 @@ simplify_ternary_operation (enum rtx_code code, machine_mode mode,
 
 	  rtx trueop0 = avoid_constant_pool_reference (op0);
 	  rtx trueop1 = avoid_constant_pool_reference (op1);
-	  if (GET_CODE (trueop0) == CONST_VECTOR
-	      && GET_CODE (trueop1) == CONST_VECTOR)
+	  if (CONST_VECTOR_P (trueop0)
+	      && CONST_VECTOR_P (trueop1))
 	    {
 	      rtvec v = rtvec_alloc (n_elts);
 	      unsigned int i;
@@ -6001,7 +6001,7 @@ simplify_ternary_operation (enum rtx_code code, machine_mode mode,
 	     with (vec_concat (X) (B)) if N == 1 or
 	     (vec_concat (A) (X)) if N == 2.  */
 	  if (GET_CODE (op0) == VEC_DUPLICATE
-	      && GET_CODE (op1) == CONST_VECTOR
+	      && CONST_VECTOR_P (op1)
 	      && known_eq (CONST_VECTOR_NUNITS (op1), 2)
 	      && known_eq (GET_MODE_NUNITS (GET_MODE (op0)), 2)
 	      && IN_RANGE (sel, 1, 2))
@@ -6046,7 +6046,7 @@ simplify_ternary_operation (enum rtx_code code, machine_mode mode,
 
 	     Only applies for vectors of two elements.  */
 	  if (GET_CODE (op0) == VEC_DUPLICATE
-	      && GET_CODE (op1) == SUBREG
+	      && SUBREG_P (op1)
 	      && GET_MODE (op1) == GET_MODE (op0)
 	      && GET_MODE (SUBREG_REG (op1)) == GET_MODE (XEXP (op0, 0))
 	      && paradoxical_subreg_p (op1)
@@ -6070,7 +6070,7 @@ simplify_ternary_operation (enum rtx_code code, machine_mode mode,
 	     with (vec_concat:outer x:inner y:inner) if N == 1,
 	     or (vec_concat:outer y:inner x:inner) if N == 2.  */
 	  if (GET_CODE (op1) == VEC_DUPLICATE
-	      && GET_CODE (op0) == SUBREG
+	      && SUBREG_P (op0)
 	      && GET_MODE (op0) == GET_MODE (op1)
 	      && GET_MODE (SUBREG_REG (op0)) == GET_MODE (XEXP (op1, 0))
 	      && paradoxical_subreg_p (op0)
@@ -6177,7 +6177,7 @@ simplify_immed_subreg (fixed_size_mode outermode, rtx op,
 
   /* Unpack the value.  */
 
-  if (GET_CODE (op) == CONST_VECTOR)
+  if (CONST_VECTOR_P (op))
     {
       num_elem = CEIL (inner_bytes, GET_MODE_UNIT_SIZE (innermode));
       elem_bitsize = GET_MODE_UNIT_BITSIZE (innermode);
@@ -6195,7 +6195,7 @@ simplify_immed_subreg (fixed_size_mode outermode, rtx op,
   for (elem = 0; elem < num_elem; elem++)
     {
       unsigned char * vp;
-      rtx el = (GET_CODE (op) == CONST_VECTOR
+      rtx el = (CONST_VECTOR_P (op)
 		? CONST_VECTOR_ELT (op, first_elem + elem)
 		: op);
 
@@ -6511,7 +6511,7 @@ simplify_subreg (machine_mode outermode, rtx op,
   if (CONST_SCALAR_INT_P (op)
       || CONST_DOUBLE_AS_FLOAT_P (op)
       || CONST_FIXED_P (op)
-      || GET_CODE (op) == CONST_VECTOR)
+      || CONST_VECTOR_P (op))
     {
       /* simplify_immed_subreg deconstructs OP into bytes and constructs
 	 the result from bytes, so it only works if the sizes of the modes
@@ -6528,7 +6528,7 @@ simplify_subreg (machine_mode outermode, rtx op,
 
       /* Handle constant-sized outer modes and variable-sized inner modes.  */
       unsigned HOST_WIDE_INT first_elem;
-      if (GET_CODE (op) == CONST_VECTOR
+      if (CONST_VECTOR_P (op)
 	  && is_a <fixed_size_mode> (outermode, &fs_outermode)
 	  && constant_multiple_p (byte, GET_MODE_UNIT_SIZE (innermode),
 				  &first_elem))
@@ -6541,7 +6541,7 @@ simplify_subreg (machine_mode outermode, rtx op,
 
   /* Changing mode twice with SUBREG => just change it once,
      or not at all if changing back op starting mode.  */
-  if (GET_CODE (op) == SUBREG)
+  if (SUBREG_P (op))
     {
       machine_mode innermostmode = GET_MODE (SUBREG_REG (op));
       poly_uint64 innermostsize = GET_MODE_SIZE (innermostmode);
@@ -6761,7 +6761,7 @@ simplify_gen_subreg (machine_mode outermode, rtx op,
   if (newx)
     return newx;
 
-  if (GET_CODE (op) == SUBREG
+  if (SUBREG_P (op)
       || GET_CODE (op) == CONCAT
       || GET_MODE (op) == VOIDmode)
     return NULL_RTX;
diff --git a/gcc/symtab.c b/gcc/symtab.c
index 63e2820eb93..dbd2221db27 100644
--- a/gcc/symtab.c
+++ b/gcc/symtab.c
@@ -1377,7 +1377,7 @@ symtab_node::make_decl_local (void)
     return;
 
   symbol = XEXP (rtl, 0);
-  if (GET_CODE (symbol) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (symbol))
     return;
 
   SYMBOL_REF_WEAK (symbol) = DECL_WEAK (decl);
@@ -1432,7 +1432,7 @@ symtab_node::copy_visibility_from (symtab_node *n)
     return;
 
   rtx symbol = XEXP (rtl, 0);
-  if (GET_CODE (symbol) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (symbol))
     return;
 
   SYMBOL_REF_WEAK (symbol) = DECL_WEAK (decl);
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index 8004951d2e8..8432a98bd47 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -150,8 +150,8 @@ gen_addr_rtx (machine_mode address_mode,
 	  if (offset_p)
 	    *offset_p = &XEXP (act_elem, 1);
 
-	  if (GET_CODE (symbol) == SYMBOL_REF
-	      || GET_CODE (symbol) == LABEL_REF
+	  if (SYMBOL_REF_P (symbol)
+	      || LABEL_REF_P (symbol)
 	      || GET_CODE (symbol) == CONST)
 	    act_elem = gen_rtx_CONST (address_mode, act_elem);
 	}
@@ -264,13 +264,13 @@ addr_for_mem_ref (struct mem_address *addr, addr_space_t as,
      into OFF and clear BSE.  Otherwise we may later try to pull a mode from
      BSE to generate a REG, which won't work with constants because they
      are modeless.  */
-  if (bse && GET_CODE (bse) == CONST_INT)
+  if (bse && CONST_INT_P (bse))
     {
       if (off)
 	off = simplify_gen_binary (PLUS, pointer_mode, bse, off);
       else
 	off = bse;
-      gcc_assert (GET_CODE (off) == CONST_INT);
+      gcc_assert (CONST_INT_P (off));
       bse = NULL_RTX;
     }
   gen_addr_rtx (pointer_mode, sym, bse, idx, st, off, &address, NULL, NULL);
diff --git a/gcc/valtrack.c b/gcc/valtrack.c
index 1f67378a867..19b0b40c2b6 100644
--- a/gcc/valtrack.c
+++ b/gcc/valtrack.c
@@ -320,7 +320,7 @@ dead_debug_global_replace_temp (struct dead_debug_global *global,
 
   dead_debug_global_entry *entry
     = dead_debug_global_find (global, *DF_REF_REAL_LOC (use));
-  gcc_checking_assert (GET_CODE (entry->reg) == REG
+  gcc_checking_assert (REG_P (entry->reg)
 		       && REGNO (entry->reg) == uregno);
 
   if (!entry->dtemp)
@@ -420,7 +420,7 @@ dead_debug_promote_uses (struct dead_debug_local *debug)
       df_ref ref;
       dead_debug_global_entry *entry;
 
-      if (GET_CODE (reg) != REG
+      if (!REG_P (reg)
 	  || REGNO (reg) < FIRST_PSEUDO_REGISTER)
 	{
 	  headp = &head->next;
@@ -691,7 +691,7 @@ dead_debug_insert_temp (struct dead_debug_local *debug, unsigned int uregno,
 					 cleanup_auto_inc_dec (src, VOIDmode),
 					 GET_MODE (dest));
 	}
-      else if (GET_CODE (dest) == SUBREG)
+      else if (SUBREG_P (dest))
 	{
 	  /* We should be setting REG here.  Lose.  */
 	  if (REGNO (SUBREG_REG (dest)) != REGNO (reg))
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 67f25c1c795..d978e62cf1a 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -975,7 +975,7 @@ use_narrower_mode_test (rtx x, const_rtx subreg)
 		    < GET_MODE_PRECISION (as_a <scalar_int_mode> (op1_mode)))
 		  {
 		    poly_uint64 byte = subreg_lowpart_offset (mode, op1_mode);
-		    if (GET_CODE (op1) == SUBREG || GET_CODE (op1) == CONCAT)
+		    if (SUBREG_P (op1) || GET_CODE (op1) == CONCAT)
 		      {
 			if (!simplify_subreg (mode, op1, op1_mode, byte))
 			  return false;
@@ -1142,7 +1142,7 @@ adjust_mems (rtx loc, const_rtx old_rtx, void *data)
 	tem = gen_rtx_raw_SUBREG (GET_MODE (loc), addr, SUBREG_BYTE (loc));
     finish_subreg:
       if (MAY_HAVE_DEBUG_BIND_INSNS
-	  && GET_CODE (tem) == SUBREG
+	  && SUBREG_P (tem)
 	  && (GET_CODE (SUBREG_REG (tem)) == PLUS
 	      || GET_CODE (SUBREG_REG (tem)) == MINUS
 	      || GET_CODE (SUBREG_REG (tem)) == MULT
@@ -2043,7 +2043,7 @@ vt_get_canonicalize_base (rtx loc)
 {
   while ((GET_CODE (loc) == PLUS
 	  || GET_CODE (loc) == AND)
-	 && GET_CODE (XEXP (loc, 1)) == CONST_INT
+	 && CONST_INT_P (XEXP (loc, 1))
 	 && (GET_CODE (loc) != AND
 	     || negative_power_of_two_p (INTVAL (XEXP (loc, 1)))))
     loc = XEXP (loc, 0);
@@ -2195,7 +2195,7 @@ vt_canonicalize_addr (dataflow_set *set, rtx oloc)
 	 canonicalize the base and we're done.  We'll normally have
 	 only one stack alignment anyway.  */
       if (GET_CODE (loc) == AND
-	  && GET_CODE (XEXP (loc, 1)) == CONST_INT
+	  && CONST_INT_P (XEXP (loc, 1))
 	  && negative_power_of_two_p (INTVAL (XEXP (loc, 1))))
 	{
 	  x = vt_canonicalize_addr (set, XEXP (loc, 0));
@@ -2250,7 +2250,7 @@ vt_canonicalize_addr (dataflow_set *set, rtx oloc)
 static inline bool
 vt_canon_true_dep (dataflow_set *set, rtx mloc, rtx maddr, rtx loc)
 {
-  if (GET_CODE (loc) != MEM)
+  if (!MEM_P (loc))
     return false;
 
   rtx addr = vt_canonicalize_addr (set, XEXP (loc, 0));
@@ -2350,7 +2350,7 @@ clobber_overlapping_mems (dataflow_set *set, rtx loc)
 {
   struct overlapping_mems coms;
 
-  gcc_checking_assert (GET_CODE (loc) == MEM);
+  gcc_checking_assert (MEM_P (loc));
 
   coms.set = set;
   coms.loc = canon_rtx (loc);
@@ -2479,7 +2479,7 @@ val_bind (dataflow_set *set, rtx val, rtx loc, bool modified)
 	 dynamic tables.  ??? We should test this before emitting the
 	 micro-op in the first place.  */
       while (l)
-	if (GET_CODE (l->loc) == MEM && XEXP (l->loc, 0) == XEXP (loc, 0))
+	if (MEM_P (l->loc) && XEXP (l->loc, 0) == XEXP (loc, 0))
 	  break;
 	else
 	  l = l->next;
@@ -2609,10 +2609,10 @@ val_reset (dataflow_set *set, decl_or_value dv)
 	{
 	  if (node->loc == cval)
 	    continue;
-	  else if (GET_CODE (node->loc) == REG)
+	  else if (REG_P (node->loc))
 	    var_reg_decl_set (set, node->loc, node->init, cdv, 0,
 			      node->set_src, NO_INSERT);
-	  else if (GET_CODE (node->loc) == MEM)
+	  else if (MEM_P (node->loc))
 	    var_mem_decl_set (set, node->loc, node->init, cdv, 0,
 			      node->set_src, NO_INSERT);
 	  else
@@ -3814,7 +3814,7 @@ canonicalize_values_star (variable **slot, dataflow_set *set)
 		 parent.  */
 	      clobber_variable_part (set, cval, ndv, 0, NULL);
 	  }
-	else if (GET_CODE (node->loc) == REG)
+	else if (REG_P (node->loc))
 	  {
 	    attrs *list = set->regs[REGNO (node->loc)], **listp;
 
@@ -4072,7 +4072,7 @@ variable_merge_over_cur (variable *s1var, struct dfset_merge *dsm)
     {
       location_chain **nextp = &node->next;
 
-      if (GET_CODE (node->loc) == REG)
+      if (REG_P (node->loc))
 	{
 	  attrs *list;
 
@@ -4444,7 +4444,7 @@ variable_post_merge_new_vals (variable **slot, dfset_post_merge *dfpm)
 	{
 	  if (GET_CODE (node->loc) == VALUE)
 	    gcc_assert (!VALUE_RECURSED_INTO (node->loc));
-	  else if (GET_CODE (node->loc) == REG)
+	  else if (REG_P (node->loc))
 	    {
 	      attrs *att, **attp, **curp = NULL;
 
@@ -4735,7 +4735,7 @@ dataflow_set_preserve_mem_locs (variable **slot, dataflow_set *set)
 	  for (loc = var->var_part[0].loc_chain; loc; loc = loc->next)
 	    {
 	      /* We want to remove dying MEMs that don't refer to DECL.  */
-	      if (GET_CODE (loc->loc) == MEM
+	      if (MEM_P (loc->loc)
 		  && (MEM_EXPR (loc->loc) != decl
 		      || int_mem_offset (loc->loc) != 0)
 		  && mem_dies_at_call (loc->loc))
@@ -4779,7 +4779,7 @@ dataflow_set_preserve_mem_locs (variable **slot, dataflow_set *set)
 		}
 	    }
 
-	  if (GET_CODE (loc->loc) != MEM
+	  if (!MEM_P (loc->loc)
 	      || (MEM_EXPR (loc->loc) == decl
 		  && int_mem_offset (loc->loc) == 0)
 	      || !mem_dies_at_call (loc->loc))
@@ -4839,7 +4839,7 @@ dataflow_set_remove_mem_locs (variable **slot, dataflow_set *set)
       if (shared_var_p (var, set->vars))
 	{
 	  for (loc = var->var_part[0].loc_chain; loc; loc = loc->next)
-	    if (GET_CODE (loc->loc) == MEM
+	    if (MEM_P (loc->loc)
 		&& mem_dies_at_call (loc->loc))
 	      break;
 
@@ -4859,7 +4859,7 @@ dataflow_set_remove_mem_locs (variable **slot, dataflow_set *set)
       for (locp = &var->var_part[0].loc_chain, loc = *locp;
 	   loc; loc = *locp)
 	{
-	  if (GET_CODE (loc->loc) != MEM
+	  if (!MEM_P (loc->loc)
 	      || !mem_dies_at_call (loc->loc))
 	    {
 	      locp = &loc->next;
@@ -5932,12 +5932,12 @@ reverse_op (rtx val, const_rtx expr, rtx_insn *insn)
       if (GET_MODE (v->val_rtx) != GET_MODE (val))
 	return;
       arg = XEXP (src, 1);
-      if (!CONST_INT_P (arg) && GET_CODE (arg) != SYMBOL_REF)
+      if (!CONST_INT_P (arg) && !SYMBOL_REF_P (arg))
 	{
 	  arg = cselib_expand_value_rtx (arg, scratch_regs, 5);
 	  if (arg == NULL_RTX)
 	    return;
-	  if (!CONST_INT_P (arg) && GET_CODE (arg) != SYMBOL_REF)
+	  if (!CONST_INT_P (arg) && !SYMBOL_REF_P (arg))
 	    return;
 	}
       ret = simplify_gen_binary (code, GET_MODE (val), val, arg);
@@ -6254,7 +6254,7 @@ prepare_call_arguments (basic_block bb, rtx_insn *insn)
   call = get_call_rtx_from (insn);
   if (call)
     {
-      if (GET_CODE (XEXP (XEXP (call, 0), 0)) == SYMBOL_REF)
+      if (SYMBOL_REF_P (XEXP (XEXP (call, 0), 0)))
 	{
 	  rtx symbol = XEXP (XEXP (call, 0), 0);
 	  if (SYMBOL_REF_DECL (symbol))
@@ -6470,12 +6470,12 @@ prepare_call_arguments (basic_block bb, rtx_insn *insn)
 		    /* Try harder, when passing address of a constant
 		       pool integer it can be easily read back.  */
 		    item = XEXP (item, 1);
-		    if (GET_CODE (item) == SUBREG)
+		    if (SUBREG_P (item))
 		      item = SUBREG_REG (item);
 		    gcc_assert (GET_CODE (item) == VALUE);
 		    val = CSELIB_VAL_PTR (item);
 		    for (l = val->locs; l; l = l->next)
-		      if (GET_CODE (l->loc) == SYMBOL_REF
+		      if (SYMBOL_REF_P (l->loc)
 			  && TREE_CONSTANT_POOL_ADDRESS_P (l->loc)
 			  && SYMBOL_REF_DECL (l->loc)
 			  && DECL_INITIAL (SYMBOL_REF_DECL (l->loc)))
@@ -6536,7 +6536,7 @@ prepare_call_arguments (basic_block bb, rtx_insn *insn)
   if (x)
     {
       x = XEXP (XEXP (x, 0), 0);
-      if (GET_CODE (x) == SYMBOL_REF)
+      if (SYMBOL_REF_P (x))
 	/* Don't record anything.  */;
       else if (CONSTANT_P (x))
 	{
@@ -6832,10 +6832,10 @@ compute_bb_dataflow (basic_block bb)
 
 	      if (VAL_HOLDS_TRACK_EXPR (loc))
 		{
-		  if (GET_CODE (uloc) == REG)
+		  if (REG_P (uloc))
 		    var_reg_set (out, uloc, VAR_INIT_STATUS_UNINITIALIZED,
 				 NULL);
-		  else if (GET_CODE (uloc) == MEM)
+		  else if (MEM_P (uloc))
 		    var_mem_set (out, uloc, VAR_INIT_STATUS_UNINITIALIZED,
 				 NULL);
 		}
@@ -6885,7 +6885,7 @@ compute_bb_dataflow (basic_block bb)
 	      else if (VAL_NEEDS_RESOLUTION (loc))
 		{
 		  gcc_assert (GET_CODE (uloc) == SET
-			      && GET_CODE (SET_SRC (uloc)) == REG);
+			      && REG_P (SET_SRC (uloc)));
 		  val_resolve (out, val, SET_SRC (uloc), insn);
 		}
 
@@ -6943,7 +6943,7 @@ compute_bb_dataflow (basic_block bb)
 		var_regno_delete (out, REGNO (uloc));
 	      else if (MEM_P (uloc))
 		{
-		  gcc_checking_assert (GET_CODE (vloc) == MEM);
+		  gcc_checking_assert (MEM_P (vloc));
 		  gcc_checking_assert (dstv == vloc);
 		  if (dstv != vloc)
 		    clobber_overlapping_mems (out, vloc);
@@ -8726,7 +8726,7 @@ emit_note_insn_var_location (variable **varp, emit_note_data *data)
 	    continue;
 	  offset = VAR_PART_OFFSET (var, i);
 	  loc2 = var->var_part[i].cur_loc;
-	  if (loc2 && GET_CODE (loc2) == MEM
+	  if (loc2 && MEM_P (loc2)
 	      && GET_CODE (XEXP (loc2, 0)) == VALUE)
 	    {
 	      rtx depval = XEXP (loc2, 0);
@@ -8998,7 +8998,7 @@ notify_dependents_of_changed_value (rtx val, variable_table_type *htab,
 		{
 		  rtx loc = ivar->var_part[i].cur_loc;
 
-		  if (loc && GET_CODE (loc) == MEM
+		  if (loc && MEM_P (loc)
 		      && XEXP (loc, 0) == val)
 		    {
 		      variable_was_changed (ivar, NULL);
@@ -9330,10 +9330,10 @@ emit_notes_in_bb (basic_block bb, dataflow_set *set)
 
 	      if (VAL_HOLDS_TRACK_EXPR (loc))
 		{
-		  if (GET_CODE (uloc) == REG)
+		  if (REG_P (uloc))
 		    var_reg_set (set, uloc, VAR_INIT_STATUS_UNINITIALIZED,
 				 NULL);
-		  else if (GET_CODE (uloc) == MEM)
+		  else if (MEM_P (uloc))
 		    var_mem_set (set, uloc, VAR_INIT_STATUS_UNINITIALIZED,
 				 NULL);
 		}
@@ -9385,7 +9385,7 @@ emit_notes_in_bb (basic_block bb, dataflow_set *set)
 	      else if (VAL_NEEDS_RESOLUTION (loc))
 		{
 		  gcc_assert (GET_CODE (uloc) == SET
-			      && GET_CODE (SET_SRC (uloc)) == REG);
+			      && REG_P (SET_SRC (uloc)));
 		  val_resolve (set, val, SET_SRC (uloc), insn);
 		}
 
@@ -9437,7 +9437,7 @@ emit_notes_in_bb (basic_block bb, dataflow_set *set)
 		var_regno_delete (set, REGNO (uloc));
 	      else if (MEM_P (uloc))
 		{
-		  gcc_checking_assert (GET_CODE (vloc) == MEM);
+		  gcc_checking_assert (MEM_P (vloc));
 		  gcc_checking_assert (vloc == dstv);
 		  if (vloc != dstv)
 		    clobber_overlapping_mems (set, vloc);
diff --git a/gcc/varasm.c b/gcc/varasm.c
index e886cdc71b8..036d8e455f2 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -1372,7 +1372,7 @@ make_decl_rtl (tree decl)
       /* If the symbol has a SYMBOL_REF_BLOCK field, update it based
 	 on the new decl information.  */
       if (MEM_P (x)
-	  && GET_CODE (XEXP (x, 0)) == SYMBOL_REF
+	  && SYMBOL_REF_P (XEXP (x, 0))
 	  && SYMBOL_REF_HAS_BLOCK_INFO_P (XEXP (x, 0)))
 	change_symbol_block (XEXP (x, 0), get_block_for_decl (decl));
 
@@ -1720,7 +1720,7 @@ get_fnname_from_decl (tree decl)
   rtx x = DECL_RTL (decl);
   gcc_assert (MEM_P (x));
   x = XEXP (x, 0);
-  gcc_assert (GET_CODE (x) == SYMBOL_REF);
+  gcc_assert (SYMBOL_REF_P (x));
   return XSTR (x, 0);
 }
 
@@ -2247,7 +2247,7 @@ assemble_variable (tree decl, int top_level ATTRIBUTE_UNUSED,
     }
 
   gcc_assert (MEM_P (decl_rtl));
-  gcc_assert (GET_CODE (XEXP (decl_rtl, 0)) == SYMBOL_REF);
+  gcc_assert (SYMBOL_REF_P (XEXP (decl_rtl, 0)));
   symbol = XEXP (decl_rtl, 0);
 
   /* If this symbol belongs to the tree constant pool, output the constant
@@ -2433,7 +2433,7 @@ assemble_external_real (tree decl)
 {
   rtx rtl = DECL_RTL (decl);
 
-  if (MEM_P (rtl) && GET_CODE (XEXP (rtl, 0)) == SYMBOL_REF
+  if (MEM_P (rtl) && SYMBOL_REF_P (XEXP (rtl, 0))
       && !SYMBOL_REF_USED (XEXP (rtl, 0))
       && !incorporeal_function_p (decl))
     {
@@ -2813,7 +2813,7 @@ assemble_integer (rtx x, unsigned int size, unsigned int align, int force)
 
       subsize = size > UNITS_PER_WORD? UNITS_PER_WORD : 1;
       subalign = MIN (align, subsize * BITS_PER_UNIT);
-      if (GET_CODE (x) == CONST_FIXED)
+      if (CONST_FIXED_P (x))
 	mclass = GET_MODE_CLASS (GET_MODE (x));
       else
 	mclass = MODE_INT;
@@ -3900,7 +3900,7 @@ force_const_mem (machine_mode in_mode, rtx x)
 
   /* If we're dropping a label to the constant pool, make sure we
      don't delete it.  */
-  if (GET_CODE (x) == LABEL_REF)
+  if (LABEL_REF_P (x))
     LABEL_PRESERVE_P (XEXP (x, 0)) = 1;
 
   return copy_rtx (def);
@@ -3973,7 +3973,7 @@ output_constant_pool_2 (fixed_size_mode mode, rtx x, unsigned int align)
 
     case MODE_VECTOR_BOOL:
       {
-	gcc_assert (GET_CODE (x) == CONST_VECTOR);
+	gcc_assert (CONST_VECTOR_P (x));
 
 	/* Pick the smallest integer mode that contains at least one
 	   whole element.  Often this is byte_mode and contains more
@@ -4008,7 +4008,7 @@ output_constant_pool_2 (fixed_size_mode mode, rtx x, unsigned int align)
 	scalar_mode submode = GET_MODE_INNER (mode);
 	unsigned int subalign = MIN (align, GET_MODE_BITSIZE (submode));
 
-	gcc_assert (GET_CODE (x) == CONST_VECTOR);
+	gcc_assert (CONST_VECTOR_P (x));
 	units = GET_MODE_NUNITS (mode);
 
 	for (i = 0; i < units; i++)
@@ -4049,7 +4049,7 @@ output_constant_pool_1 (class constant_descriptor_rtx *desc,
     {
     case CONST:
       if (GET_CODE (XEXP (tmp, 0)) != PLUS
-	  || GET_CODE (XEXP (XEXP (tmp, 0), 0)) != LABEL_REF)
+	  || !LABEL_REF_P (XEXP (XEXP (tmp, 0), 0)))
 	break;
       tmp = XEXP (XEXP (tmp, 0), 0);
       /* FALLTHRU  */
@@ -4130,7 +4130,7 @@ mark_constants_in_pattern (rtx insn)
   FOR_EACH_SUBRTX (iter, array, PATTERN (insn), ALL)
     {
       const_rtx x = *iter;
-      if (GET_CODE (x) == SYMBOL_REF)
+      if (SYMBOL_REF_P (x))
 	{
 	  if (CONSTANT_POOL_ADDRESS_P (x))
 	    {
@@ -5568,7 +5568,7 @@ mark_weak (tree decl)
   if (DECL_RTL_SET_P (decl)
       && MEM_P (DECL_RTL (decl))
       && XEXP (DECL_RTL (decl), 0)
-      && GET_CODE (XEXP (DECL_RTL (decl), 0)) == SYMBOL_REF)
+      && SYMBOL_REF_P (XEXP (DECL_RTL (decl), 0)))
     SYMBOL_REF_WEAK (XEXP (DECL_RTL (decl), 0)) = 1;
 }
 
@@ -6968,7 +6968,7 @@ default_encode_section_info (tree decl, rtx rtl, int first ATTRIBUTE_UNUSED)
   if (!MEM_P (rtl))
     return;
   symbol = XEXP (rtl, 0);
-  if (GET_CODE (symbol) != SYMBOL_REF)
+  if (!SYMBOL_REF_P (symbol))
     return;
 
   flags = SYMBOL_REF_FLAGS (symbol) & SYMBOL_FLAG_HAS_BLOCK_INFO;
@@ -7494,7 +7494,7 @@ place_block_symbol (rtx symbol)
 	  rtx target = DECL_RTL (snode->ultimate_alias_target ()->decl);
 
 	  gcc_assert (MEM_P (target)
-		      && GET_CODE (XEXP (target, 0)) == SYMBOL_REF
+		      && SYMBOL_REF_P (XEXP (target, 0))
 		      && SYMBOL_REF_HAS_BLOCK_INFO_P (XEXP (target, 0)));
 	  target = XEXP (target, 0);
 	  place_block_symbol (target);
diff --git a/gcc/xcoffout.h b/gcc/xcoffout.h
index d8a031268ff..0ab9e5d09c2 100644
--- a/gcc/xcoffout.h
+++ b/gcc/xcoffout.h
@@ -78,7 +78,7 @@ along with GCC; see the file COPYING3.  If not see
 									\
       /* If we are writing a function name, we must ensure that		\
 	 there is no storage-class suffix on the name.  */		\
-      if (CODE == N_FUN && GET_CODE (ADDR) == SYMBOL_REF)		\
+      if (CODE == N_FUN && SYMBOL_REF_P (ADDR))		\
 	{								\
 	  const char *_p = XSTR (ADDR, 0);				\
 	  if (*_p == '*')						\
-- 
2.21.0

From 3ad93997b70d9b5fc575cf35f6e30852270a82f5 Mon Sep 17 00:00:00 2001
From: Arvind Sankar <nivedita@alum.mit.edu>
Date: Fri, 2 Aug 2019 15:00:08 -0400
Subject: [PATCH 3/3] Use rtx_code predicates instead of GET_CODE

---
 gcc/combine-stack-adj.c | 3 +--
 gcc/dwarf2out.c         | 3 +--
 gcc/rtlanal.c           | 3 +--
 3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/gcc/combine-stack-adj.c b/gcc/combine-stack-adj.c
index 3638a1b10ee..f98a0d54c98 100644
--- a/gcc/combine-stack-adj.c
+++ b/gcc/combine-stack-adj.c
@@ -634,8 +634,7 @@ combine_stack_adjustments_for_block (basic_block bb)
 		      && GET_CODE (XEXP (XEXP (dest, 0), 1)) == PLUS
 		      && XEXP (XEXP (XEXP (dest, 0), 1), 0)
 			 == stack_pointer_rtx
-		      && GET_CODE (XEXP (XEXP (XEXP (dest, 0), 1), 1))
-		         == CONST_INT
+		      && CONST_INT_P (XEXP (XEXP (XEXP (dest, 0), 1), 1))
 		      && INTVAL (XEXP (XEXP (XEXP (dest, 0), 1), 1))
 		         == -last_sp_adjust))
 	      && XEXP (XEXP (dest, 0), 0) == stack_pointer_rtx
diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
index b2b4f6d82b2..ea38963d177 100644
--- a/gcc/dwarf2out.c
+++ b/gcc/dwarf2out.c
@@ -23747,8 +23747,7 @@ gen_variable_die (tree decl, tree origin, dw_die_ref context_die)
 		      if (single_element_loc_list_p (loc)
 			  && loc->expr->dw_loc_opc == DW_OP_addr
 			  && loc->expr->dw_loc_next == NULL
-			  && GET_CODE (loc->expr->dw_loc_oprnd1.v.val_addr)
-			     == SYMBOL_REF)
+			  && SYMBOL_REF_P (loc->expr->dw_loc_oprnd1.v.val_addr))
 			{
 			  rtx x = loc->expr->dw_loc_oprnd1.v.val_addr;
 			  loc->expr->dw_loc_oprnd1.v.val_addr
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 3ce8c1bb03e..adb0929a1aa 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -3347,8 +3347,7 @@ computed_jump_p (const rtx_insn *insn)
 
 	  for (i = len - 1; i >= 0; i--)
 	    if (GET_CODE (XVECEXP (pat, 0, i)) == USE
-		&& (GET_CODE (XEXP (XVECEXP (pat, 0, i), 0))
-		    == LABEL_REF))
+		&& LABEL_REF_P (XEXP (XVECEXP (pat, 0, i), 0)))
 	      {
 	        has_use_labelref = 1;
 	        break;
-- 
2.21.0


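For anyone skimming the diffs above, here is a minimal, self-contained sketch of the transformation the patches apply. It is not taken from the patch itself: the mock rtx type and the macro bodies are stand-ins written in the style of the existing predicates in gcc/rtl.h, only to show how a predicate macro replaces an open-coded GET_CODE comparison.

```c
/* Minimal, self-contained sketch of the GET_CODE -> predicate-macro
   conversion.  The real rtx type and rtx codes live in gcc/rtl.h; the
   mock definitions below only illustrate the pattern and are NOT the
   GCC sources.  */

#include <stdio.h>

enum rtx_code { REG, MEM, SYMBOL_REF, CONST_INT };

struct rtx_mock { enum rtx_code code; };
typedef struct rtx_mock *rtx;

#define GET_CODE(X)      ((X)->code)

/* Predicate macros in the style of rtl.h: one per code, so callers can
   write MEM_P (x) instead of GET_CODE (x) == MEM.  */
#define REG_P(X)         (GET_CODE (X) == REG)
#define MEM_P(X)         (GET_CODE (X) == MEM)
#define SYMBOL_REF_P(X)  (GET_CODE (X) == SYMBOL_REF)

int
main (void)
{
  struct rtx_mock m = { MEM };
  rtx x = &m;

  /* Before the conversion: open-coded comparison against the code.  */
  if (GET_CODE (x) == MEM)
    printf ("GET_CODE comparison matched MEM\n");

  /* After the conversion: predicate macro with identical semantics.  */
  if (MEM_P (x))
    printf ("MEM_P predicate matched MEM\n");

  return 0;
}
```

The two conditions are equivalent; the predicate form is simply shorter and reads as a question about the rtx, which is why the script above can substitute it mechanically wherever the comparison fits on one line.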