[PATCH] Prepare for prefixed instructions on PowerPC

Michael Meissner meissner@linux.ibm.com
Thu Jun 27 20:18:00 GMT 2019


A future PowerPC machine may have prefixed instructions.  This patch changes
the RTL "length" attribute of all of the "mov*" and "*extend*" insns from an
explicit "4" to "*".  This change prepares for the length attribute to be set
appropriately for single-instruction loads, stores, and add immediates that may
become prefixed instructions.  The idea is to make this change now, rather than
having these lines be part of the larger patches to come.

As we discussed off-line earlier, I changed all of the "4" lengths to "*",
even for instruction alternatives that are not subject to becoming prefixed
instructions (e.g. the sign extend instruction "extsw" has its length set to
"*", even though it will never be a prefixed instruction).  I also changed the
Altivec load/store instructions, as well as a power8 permute instruction, just
to be consistent with the other changes.

This patch does not touch the places where the length is not "4".  Those
lengths will be updated as needed as the rest of the prefixed instruction
support is rolled out.  This is intended to be a simple patch that makes the
later patches somewhat simpler.
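To illustrate the mechanics (with a made-up pattern, not one from this patch):
an explicit length pins every alternative to 4 bytes, whereas "*" defers to the
default value of the "length" attribute, which a later patch can compute per
insn:

```lisp
;; Hypothetical pattern, for illustration only.
;; Before: each alternative hard-codes a 4-byte length.
(define_insn "*example_mov"
  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,m")
	(match_operand:SI 1 "gpc_reg_operand"       "r,r"))]
  ""
  "@
   mr %0,%1
   stw%U0%X0 %1,%0"
  [(set_attr "type" "*,store")
   (set_attr "length" "4,4")])

;; After: "*" falls back to the default "length" computation, so a later
;; patch can make a single-instruction store alternative report a larger
;; length when a prefixed form is emitted, without touching this pattern.
  [(set_attr "type" "*,store")
   (set_attr "length" "*,*")])
```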

I did a bootstrap and make check.  There were no regressions.  Can I check this
patch into the trunk?

2019-06-27  Michael Meissner  <meissner@linux.ibm.com>

	* config/rs6000/altivec.md (altivec_mov<mode>, VM2 iterator):
	Change the RTL attribute "length" from "4" to "*" to allow the
	length attribute to be adjusted automatically for prefixed load,
	store, and add immediate instructions.
	* config/rs6000/rs6000.md (extendhi<mode>2, EXTHI iterator):
	Likewise.
	(extendsi<mode>2, EXTSI iterator): Likewise.
	(movsi_internal1): Likewise.
	(movsi_from_sf): Likewise.
	(movdi_from_sf_zero_ext): Likewise.
	(mov<mode>_internal): Likewise.
	(movcc_internal1, QHI iterator): Likewise.
	(mov<mode>_softfloat, FMOVE32 iterator): Likewise.
	(movsf_from_si): Likewise.
	(mov<mode>_hardfloat32, FMOVE64 iterator): Likewise.
	(mov<mode>_softfloat64, FMOVE64 iterator): Likewise.
	(mov<mode>, FMOVE128 iterator): Likewise.
	(movdi_internal32): Likewise.
	(movdi_internal64): Likewise.
	* config/rs6000/vsx.md (vsx_le_permute_<mode>, VSX_TI iterator):
	Likewise.
	(vsx_le_undo_permute_<mode>, VSX_TI iterator): Likewise.
	(vsx_mov<mode>_64bit, VSX_M iterator): Likewise.
	(vsx_mov<mode>_32bit, VSX_M iterator): Likewise.
	(vsx_splat_v4sf): Likewise.

Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md	(revision 272714)
+++ gcc/config/rs6000/altivec.md	(working copy)
@@ -256,7 +256,7 @@ (define_insn "*altivec_mov<mode>"
    * return output_vec_const_move (operands);
    #"
   [(set_attr "type" "vecstore,vecload,veclogical,store,load,*,veclogical,*,*")
-   (set_attr "length" "4,4,4,20,20,20,4,8,32")])
+   (set_attr "length" "*,*,*,20,20,20,*,8,32")])
 
 ;; Unlike other altivec moves, allow the GPRs, since a normal use of TImode
 ;; is for unions.  However for plain data movement, slightly favor the vector
Index: gcc/config/rs6000/rs6000.md
===================================================================
--- gcc/config/rs6000/rs6000.md	(revision 272719)
+++ gcc/config/rs6000/rs6000.md	(working copy)
@@ -965,7 +965,7 @@ (define_insn "*extendhi<mode>2"
    vextsh2d %0,%1"
   [(set_attr "type" "load,exts,fpload,vecperm")
    (set_attr "sign_extend" "yes")
-   (set_attr "length" "4,4,8,4")
+   (set_attr "length" "*,*,8,*")
    (set_attr "isa" "*,*,p9v,p9v")])
 
 (define_split
@@ -1040,7 +1040,7 @@ (define_insn "extendsi<mode>2"
    #"
   [(set_attr "type" "load,exts,fpload,fpload,mffgpr,vecexts,vecperm,mftgpr")
    (set_attr "sign_extend" "yes")
-   (set_attr "length" "4,4,4,4,4,4,8,8")
+   (set_attr "length" "*,*,*,*,*,*,8,8")
    (set_attr "isa" "*,*,p6,p8v,p8v,p9v,p8v,p8v")])
 
 (define_split
@@ -6915,11 +6915,11 @@ (define_insn "*movsi_internal1"
 		 veclogical, veclogical,  vecsimple,   mffgpr,      mftgpr,
 		 *,          *,           *")
    (set_attr "length"
-		"4,          4,           4,           4,           4,
-		 4,          4,           4,           4,           4,
-		 8,          4,           4,           4,           4,
-		 4,          4,           8,           4,           4,
-		 4,          4,           4")
+		"*,          *,           *,           *,           *,
+		 *,          *,           *,           *,           *,
+		 8,          *,           *,           *,           *,
+		 *,          *,           8,           *,           *,
+		 *,          *,           *")
    (set_attr "isa"
 		"*,          *,           *,           p8v,         p8v,
 		 *,          p8v,         p8v,         *,           *,
@@ -6995,9 +6995,9 @@ (define_insn_and_split "movsi_from_sf"
 		 fpstore,    fpstore,     fpstore,     mftgpr,   fp,
 		 mffgpr")
    (set_attr "length"
-		"4,          4,           4,           4,        4,
-		 4,          4,           4,           8,        4,
-		 4")
+		"*,          *,           *,           *,        *,
+		 *,          *,           *,           8,        *,
+		 *")
    (set_attr "isa"
 		"*,          *,           p8v,         p8v,      *,
 		 *,          p9v,         p8v,         p8v,      p8v,
@@ -7049,8 +7049,8 @@ (define_insn_and_split "*movdi_from_sf_z
 		"*,          load,        fpload,      fpload,   two,
 		 two,        mffgpr")
    (set_attr "length"
-		"4,          4,           4,           4,        8,
-		 8,          4")
+		"*,          *,           *,           *,        8,
+		 8,          *")
    (set_attr "isa"
 		"*,          *,           p8v,         p8v,      p8v,
 		 p9v,        p8v")])
@@ -7178,9 +7178,9 @@ (define_insn "*mov<mode>_internal"
 		 vecsimple, vecperm,   vecperm,   vecperm,   vecperm,   mftgpr,
 		 mffgpr,    mfjmpr,    mtjmpr,    *")
    (set_attr "length"
-		"4,         4,         4,         4,         4,         4,
-		 4,         4,         4,         4,         8,         4,
-		 4,         4,         4,         4")
+		"*,         *,         *,         *,         *,         *,
+		 *,         *,         *,         *,         8,         *,
+		 *,         *,         *,         *")
    (set_attr "isa"
 		"*,         *,         p9v,       *,         p9v,       *,
 		 p9v,       p9v,       p9v,       p9v,       p9v,       p9v,
@@ -7231,7 +7231,7 @@ (define_insn "*movcc_internal1"
       (const_string "mtjmpr")
       (const_string "load")
       (const_string "store")])
-   (set_attr "length" "4,4,12,4,4,8,4,4,4,4,4,4")])
+   (set_attr "length" "*,*,12,*,*,8,*,*,*,*,*,*")])
 
 ;; For floating-point, we normally deal with the floating-point registers
 ;; unless -msoft-float is used.  The sole exception is that parameter passing
@@ -7385,8 +7385,8 @@ (define_insn "*mov<mode>_softfloat"
          *,          *,         *,         *")
 
    (set_attr "length"
-	"4,          4,         4,         4,         4,         4,
-         4,          4,         8,         4")])
+	"*,          *,         *,         *,         *,         *,
+         *,          *,         8,         *")])
 
 ;; Like movsf, but adjust a SI value to be used in a SF context, i.e.
 ;; (set (reg:SF ...) (subreg:SF (reg:SI ...) 0))
@@ -7448,8 +7448,8 @@ (define_insn_and_split "movsf_from_si"
   DONE;
 }
   [(set_attr "length"
-	    "4,          4,         4,         4,         4,         4,
-	     4,          12,        4,         4")
+	    "*,          *,         *,         *,         *,         *,
+	     *,          12,        *,         *")
    (set_attr "type"
 	    "load,       fpload,    fpload,    fpload,    store,     fpstore,
 	     fpstore,    vecfloat,  mffgpr,    *")
@@ -7586,8 +7586,8 @@ (define_insn "*mov<mode>_hardfloat32"
              store,       load,       two")
    (set_attr "size" "64")
    (set_attr "length"
-            "4,           4,          4,          4,          4,
-             4,           4,          4,          4,          8,
+            "*,           *,          *,          *,          *,
+             *,           *,          *,          *,          8,
              8,           8,          8")
    (set_attr "isa"
             "*,           *,          *,          p9v,        p9v,
@@ -7696,8 +7696,8 @@ (define_insn "*mov<mode>_softfloat64"
              *,       *,      *")
 
    (set_attr "length"
-            "4,       4,      4,      4,      4,      8,
-             12,      16,     4")])
+            "*,       *,      *,      *,      *,      8,
+             12,      16,     *")])
 
 (define_expand "mov<mode>"
   [(set (match_operand:FMOVE128 0 "general_operand")
@@ -8760,10 +8760,10 @@ (define_insn "*movdi_internal32"
           vecsimple")
    (set_attr "size" "64")
    (set_attr "length"
-         "8,         8,         8,         4,         4,         4,
-          16,        4,         4,         4,         4,         4,
-          4,         4,         4,         4,         4,         8,
-          4")
+         "8,         8,         8,         *,         *,         *,
+          16,        *,         *,         *,         *,         *,
+          *,         *,         *,         *,         *,         8,
+          *")
    (set_attr "isa"
          "*,         *,         *,         *,         *,         *,
           *,         p9v,       p7v,       p9v,       p7v,       *,
@@ -8853,11 +8853,11 @@ (define_insn "*movdi_internal64"
                 mftgpr,    mffgpr")
    (set_attr "size" "64")
    (set_attr "length"
-               "4,         4,         4,         4,         4,          20,
-                4,         4,         4,         4,         4,          4,
-                4,         4,         4,         4,         4,          4,
-                4,         8,         4,         4,         4,          4,
-                4,         4")
+               "*,         *,         *,         *,         *,          20,
+                *,         *,         *,         *,         *,          *,
+                *,         *,         *,         *,         *,          *,
+                *,         8,         *,         *,         *,          *,
+                *,         *")
    (set_attr "isa"
                "*,         *,         *,         *,         *,          *,
                 *,         *,         *,         p9v,       p7v,        p9v,
Index: gcc/config/rs6000/vsx.md
===================================================================
--- gcc/config/rs6000/vsx.md	(revision 272714)
+++ gcc/config/rs6000/vsx.md	(working copy)
@@ -923,7 +923,7 @@ (define_insn "*vsx_le_permute_<mode>"
    mr %0,%L1\;mr %L0,%1
    ld%U1%X1 %0,%L1\;ld%U1%X1 %L0,%1
    std%U0%X0 %L1,%0\;std%U0%X0 %1,%L0"
-  [(set_attr "length" "4,4,4,8,8,8")
+  [(set_attr "length" "*,*,*,8,8,8")
    (set_attr "type" "vecperm,vecload,vecstore,*,load,store")])
 
 (define_insn_and_split "*vsx_le_undo_permute_<mode>"
@@ -1150,9 +1150,9 @@ (define_insn "vsx_mov<mode>_64bit"
                 store,     load,      store,     *,         vecsimple, vecsimple,
                 vecsimple, *,         *,         vecstore,  vecload")
    (set_attr "length"
-               "4,         4,         4,         8,         4,         8,
-                8,         8,         8,         8,         4,         4,
-                4,         20,        8,         4,         4")
+               "*,         *,         *,         8,         *,         8,
+                8,         8,         8,         8,         *,         *,
+                *,         20,        8,         *,         *")
    (set_attr "isa"
                "<VSisa>,   <VSisa>,   <VSisa>,   *,         *,         *,
                 *,         *,         *,         *,         p9v,       *,
@@ -1183,9 +1183,9 @@ (define_insn "*vsx_mov<mode>_32bit"
                 vecsimple, vecsimple, vecsimple, *,         *,
                 vecstore,  vecload")
    (set_attr "length"
-               "4,         4,         4,         16,        16,        16,
-                4,         4,         4,         20,        16,
-                4,         4")
+               "*,         *,         *,         16,        16,        16,
+                *,         *,         *,         20,        16,
+                *,         *")
    (set_attr "isa"
                "<VSisa>,   <VSisa>,   <VSisa>,   *,         *,         *,
                 p9v,       *,         <VSisa>,   *,         *,
@@ -4112,7 +4112,7 @@ (define_insn_and_split "vsx_splat_v4sf"
 		      (const_int 0)] UNSPEC_VSX_XXSPLTW))]
   ""
   [(set_attr "type" "vecload,vecperm,mftgpr")
-   (set_attr "length" "4,8,4")
+   (set_attr "length" "*,8,*")
    (set_attr "isa" "*,p8v,*")])
 
 ;; V4SF/V4SI splat from a vector element

-- 
Michael Meissner, IBM
IBM, M/S 2506R, 550 King Street, Littleton, MA 01460-6245, USA
email: meissner@linux.ibm.com, phone: +1 (978) 899-4797


