This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
[power7-meissner] Fixup various VSX thinkos
- From: Michael Meissner <meissner@linux.vnet.ibm.com>
- To: gcc-patches@gcc.gnu.org
- Date: Fri, 13 Feb 2009 18:25:27 -0500
- Subject: [power7-meissner] Fixup various VSX thinkos
Today's patch fixes several thinkos in my previous patch; with it I can now
build tests that use various vector operations.
2009-02-13 Michael Meissner <meissner@linux.vnet.ibm.com>
* config.in: Update two comments.
* config/rs6000/vector.md (VEC_L): Add V2DI type.
(move<mode>): Use VEC_L to get all vector types, and delete the
separate integer mode move definitions.
(vector_load_<mode>): Ditto.
(vector_store_<mode>): Ditto.
(vector move splitters): Move GPR register splitters here from
altivec.md.
* config/rs6000/constraints.md ("j"): New constraint to match the
mode's zero constant.
* config/rs6000/rs6000.c (rs6000_hard_regno_nregs_internal): Only
count the FPRs as being 128 bits if the mode is a VSX type.
(rs6000_hard_regno_mode_ok): Ditto.
(rs6000_emit_minmax): Use new VSX_MODE instead of separate tests.
* config/rs6000/vsx.md (VSX_L): Add V2DImode.
(VSm): Rename from VSX_mem, add modes for integer vectors. Change
all uses.
(VSs): Rename from VSX_op, add modes for integer vectors. Change
all uses.
(VSr): New mode attribute to give the register class.
(mov<mode>_vsx): Use VSr to get the register preferences. Add
explicit 0 option.
(scalar double precision patterns): Do not use v register
constraint right now.
(logical patterns): Use VSr mode attribute for register
preferences.
* config/rs6000/rs6000.h (VSX_SCALAR_MODE): New macro.
(VSX_MODE): Ditto.
* config/rs6000/altivec.md (VM): New mode iterator for memory
operations. Add V2DI mode.
(mov_altivec_<mode>): Disable if -mvsx for all modes, not just
V4SFmode.
(gpr move splitters): Move to vector.md.
(and<mode>3_altivec): Use VM mode iterator, not V.
(ior<mode>3_altivec): Ditto.
(xor<mode>3_altivec): Ditto.
(one_cmpl<mode>2_altivec): Ditto.
(nor<mode>3_altivec): Ditto.
(andc<mode>3_altivec): Ditto.
* config/rs6000/rs6000.md (movdf_hardfloat): Back out vsx changes.
(movdf_hardfloat64_vsx): Delete.
Index: gcc/config.in
===================================================================
--- gcc/config.in (revision 144108)
+++ gcc/config.in (working copy)
@@ -334,13 +334,13 @@
#endif
-/* Define if your assembler supports popcntb field. */
+/* Define if your assembler supports popcntb instruction. */
#ifndef USED_FOR_TARGET
#undef HAVE_AS_POPCNTB
#endif
-/* Define if your assembler supports popcntd field. */
+/* Define if your assembler supports popcntd instruction. */
#ifndef USED_FOR_TARGET
#undef HAVE_AS_POPCNTD
#endif
Index: gcc/config/rs6000/vector.md
===================================================================
--- gcc/config/rs6000/vector.md (revision 144141)
+++ gcc/config/rs6000/vector.md (working copy)
@@ -31,12 +31,12 @@
(define_mode_iterator VEC_F [V4SF V2DF])
;; Vector logical modes
-(define_mode_iterator VEC_L [V4SI V8HI V16QI V4SF V2DF])
+(define_mode_iterator VEC_L [V4SI V8HI V16QI V4SF V2DF V2DI])
-;; Vector floating point move instructions.
+;; Vector move instructions.
(define_expand "mov<mode>"
- [(set (match_operand:VEC_F 0 "nonimmediate_operand" "")
- (match_operand:VEC_F 1 "any_operand" ""))]
+ [(set (match_operand:VEC_L 0 "nonimmediate_operand" "")
+ (match_operand:VEC_L 1 "any_operand" ""))]
"TARGET_ALTIVEC || TARGET_VSX"
{
rs6000_emit_move (operands[0], operands[1], <MODE>mode);
@@ -46,8 +46,8 @@
;; Generic vector floating point load/store instructions. These will match
;; insns defined in vsx.md or altivec.md depending on the switches.
(define_expand "vector_load_<mode>"
- [(set (match_operand:VEC_F 0 "vfloat_operand" "")
- (match_operand:VEC_F 1 "memory_operand" ""))]
+ [(set (match_operand:VEC_L 0 "vfloat_operand" "")
+ (match_operand:VEC_L 1 "memory_operand" ""))]
"TARGET_ALTIVEC || TARGET_VSX"
{
rs6000_emit_move (operands[0], operands[1], <MODE>mode);
@@ -55,14 +55,27 @@
})
(define_expand "vector_store_<mode>"
- [(set (match_operand:VEC_F 0 "memory_operand" "")
- (match_operand:VEC_F 1 "vfloat_operand" ""))]
+ [(set (match_operand:VEC_L 0 "memory_operand" "")
+ (match_operand:VEC_L 1 "vfloat_operand" ""))]
"TARGET_ALTIVEC || TARGET_VSX"
{
rs6000_emit_move (operands[0], operands[1], <MODE>mode);
DONE;
})
+;; Splits if a GPR register was chosen for the move
+(define_split
+ [(set (match_operand:VEC_L 0 "nonimmediate_operand" "")
+ (match_operand:VEC_L 1 "input_operand" ""))]
+ "(TARGET_ALTIVEC || TARGET_VSX) && reload_completed
+ && gpr_or_gpr_p (operands[0], operands[1])"
+ [(pc)]
+{
+ rs6000_split_multireg_move (operands[0], operands[1]);
+ DONE;
+})
+
+
;; Generic floating point vector arithmetic support
(define_expand "add<mode>3"
[(set (match_operand:VEC_F 0 "vfloat_operand" "")
@@ -167,36 +180,6 @@
-;; Vector integer move instructions.
-(define_expand "mov<mode>"
- [(set (match_operand:VEC_I 0 "nonimmediate_operand" "")
- (match_operand:VEC_I 1 "any_operand" ""))]
- "TARGET_ALTIVEC"
-{
- rs6000_emit_move (operands[0], operands[1], <MODE>mode);
- DONE;
-})
-
-;; Generic vector integer load/store instructions.
-(define_expand "vector_load_<mode>"
- [(set (match_operand:VEC_I 0 "vint_operand" "")
- (match_operand:VEC_I 1 "memory_operand" ""))]
- "TARGET_ALTIVEC"
-{
- rs6000_emit_move (operands[0], operands[1], <MODE>mode);
- DONE;
-})
-
-(define_expand "vector_store_<mode>"
- [(set (match_operand:VEC_I 0 "memory_operand" "")
- (match_operand:VEC_I 1 "vint_operand" ""))]
- "TARGET_ALTIVEC"
-{
- rs6000_emit_move (operands[0], operands[1], <MODE>mode);
- DONE;
-})
-
-
;; Vector logical instructions
(define_expand "xor<mode>3"
[(set (match_operand:VEC_L 0 "vlogical_operand" "")
Index: gcc/config/rs6000/constraints.md
===================================================================
--- gcc/config/rs6000/constraints.md (revision 144106)
+++ gcc/config/rs6000/constraints.md (working copy)
@@ -159,3 +159,7 @@
(define_constraint "W"
"vector constant that does not require memory"
(match_operand 0 "easy_vector_constant"))
+
+(define_constraint "j"
+ "Zero vector constant"
+ (match_test "(op == const0_rtx || op == CONST0_RTX (GET_MODE (op)))"))
Index: gcc/config/rs6000/rs6000.c
===================================================================
--- gcc/config/rs6000/rs6000.c (revision 144141)
+++ gcc/config/rs6000/rs6000.c (working copy)
@@ -1342,7 +1342,7 @@ struct gcc_target targetm = TARGET_INITI
static int
rs6000_hard_regno_nregs_internal (int regno, enum machine_mode mode)
{
- if (TARGET_VSX && VSX_REGNO_P (regno))
+ if (TARGET_VSX && VSX_REGNO_P (regno) && VSX_MODE (mode))
return (GET_MODE_SIZE (mode) + UNITS_PER_VSX_WORD - 1) / UNITS_PER_VSX_WORD;
if (FP_REGNO_P (regno))
@@ -1374,7 +1374,7 @@ rs6000_hard_regno_mode_ok (int regno, en
{
/* VSX registers that overlap the FPR registers are larger than for non-VSX
implementations. */
- if (TARGET_VSX && VSX_REGNO_P (regno) && VSX_VECTOR_MODE (mode))
+ if (TARGET_VSX && VSX_REGNO_P (regno) && VSX_MODE (mode))
return VSX_REGNO_P (regno + rs6000_hard_regno_nregs[mode][regno] - 1);
/* The GPRs can hold any mode, but values bigger than one register
@@ -3095,6 +3095,9 @@ output_vec_const_move (rtx *operands)
vec = operands[1];
mode = GET_MODE (dest);
+ if (TARGET_VSX && zero_constant (vec, mode))
+ return "xxlxor %x0,%x0,%x0";
+
if (TARGET_ALTIVEC)
{
rtx splat_vec;
@@ -13923,7 +13926,7 @@ rs6000_emit_minmax (rtx dest, enum rtx_c
/* VSX/altivec have direct min/max insns. */
if ((code == SMAX || code == SMIN)
- && ((TARGET_VSX && (mode == DFmode || VSX_VECTOR_MODE (mode)))
+ && ((TARGET_VSX && VSX_MODE (mode))
|| (TARGET_ALTIVEC && ALTIVEC_VECTOR_MODE (mode))))
{
emit_insn (gen_rtx_SET (VOIDmode,
Index: gcc/config/rs6000/vsx.md
===================================================================
--- gcc/config/rs6000/vsx.md (revision 144141)
+++ gcc/config/rs6000/vsx.md (working copy)
@@ -24,251 +24,273 @@
(define_mode_iterator VSX_F [V4SF V2DF])
;; Iterator for logical types supported by VSX
-(define_mode_iterator VSX_L [V4SI V8HI V16QI V4SF V2DF])
+(define_mode_iterator VSX_L [V4SI V8HI V16QI V4SF V2DF V2DI])
;; Map into the appropriate load/store name based on the type
-(define_mode_attr VSX_mem [(V4SF "vw4") (V2DF "vd2")])
+(define_mode_attr VSm [(V16QI "vw4")
+ (V8HI "vw4")
+ (V4SI "vw4")
+ (V4SF "vw4")
+ (V2DF "vd2")
+ (V2DI "vd2")])
;; Map into the appropriate suffix based on the type
-(define_mode_attr VSX_op [(V4SF "sp") (V2DF "dp")])
+(define_mode_attr VSs [(V16QI "sp")
+ (V8HI "sp")
+ (V4SI "sp")
+ (V4SF "sp")
+ (V2DF "dp")
+ (V2DI "dp")])
+
+;; Map into the register class used for the move instructions
+(define_mode_attr VSr [(V16QI "v")
+ (V8HI "v")
+ (V4SI "v")
+ (V4SF "fv")
+ (V2DI "v")
+ (V2DF "fv")])
;; VSX move instructions
(define_insn "*mov<mode>_vsx"
- [(set (match_operand:VSX_F 0 "nonimmediate_operand" "=Z,fv,fv,o,r,r,fv")
- (match_operand:VSX_F 1 "input_operand" "fv,Z,fv,r,o,r,W"))]
+ [(set (match_operand:VSX_L 0 "nonimmediate_operand" "=Z,<VSr>,<VSr>,o,r,r,fv,v")
+ (match_operand:VSX_L 1 "input_operand" "<VSr>,Z,<VSr>,r,o,r,j,W"))]
"TARGET_VSX
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
{
switch (which_alternative)
{
- case 0: return "stx<VSX_mem>x %x1,%y0";
- case 1: return "lx<VSX_mem>x %x0,%y1";
- case 2: return "xvmov<VSX_op> %x0,%x1";
+ case 0: return "stx<VSm>x %x1,%y0";
+ case 1: return "lx<VSm>x %x0,%y1";
+ case 2: return "xvmov<VSs> %x0,%x1";
case 3: return "#";
case 4: return "#";
case 5: return "#";
- case 6: return output_vec_const_move (operands);
+ case 6: return "xxlxor %x0,%x0,%x0";
+ case 7: return output_vec_const_move (operands);
default: gcc_unreachable ();
}
}
- [(set_attr "type" "vecstore,vecload,vecsimple,store,load,*,*")])
+ [(set_attr "type" "vecstore,vecload,vecsimple,store,load,*,vecsimple,*")])
+
;; VSX vector floating point arithmetic instructions
(define_insn "*add<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (plus:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (plus:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvadd<VSX_op> %x0,%x1,%x2"
+ "xvadd<VSs> %x0,%x1,%x2"
[(set_attr "type" "vecfloat")])
(define_insn "*sub<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (minus:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (minus:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvsub<VSX_op> %x0,%x1,%x2"
+ "xvsub<VSs> %x0,%x1,%x2"
[(set_attr "type" "vecfloat")])
(define_insn "*mul<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (mult:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (mult:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvmul<VSX_op> %x0,%x1,%x2"
+ "xvmul<VSs> %x0,%x1,%x2"
[(set_attr "type" "vecfloat")])
(define_insn "*div<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (div:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (div:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvdiv<VSX_op> %x0,%x1,%x2"
+ "xvdiv<VSs> %x0,%x1,%x2"
[(set_attr "type" "vecfdiv")])
(define_insn "*neg<mode>2_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (neg:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (neg:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvneg<VSX_op> %x0,%x1"
+ "xvneg<VSs> %x0,%x1"
[(set_attr "type" "vecfloat")])
(define_insn "*abs<mode>2_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (abs:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (abs:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvabs<VSX_op> %x0,%x1"
+ "xvabs<VSs> %x0,%x1"
[(set_attr "type" "vecfloat")])
(define_insn "*nabs<mode>2_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (neg:VSX_F (abs:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv"))))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (neg:VSX_F
+ (abs:VSX_F
+ (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>"))))]
"TARGET_VSX"
- "xvnabs<VSX_op> %x0,%x1"
+ "xvnabs<VSs> %x0,%x1"
[(set_attr "type" "vecfloat")])
(define_insn "*smax<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (smax:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (smax:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvmax<VSX_op> %x0,%x1,%x2"
+ "xvmax<VSs> %x0,%x1,%x2"
[(set_attr "type" "veccmp")])
(define_insn "*smin<mode>3_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (smin:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (smin:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvmin<VSX_op> %x0,%x1,%x2"
+ "xvmin<VSs> %x0,%x1,%x2"
[(set_attr "type" "veccmp")])
(define_insn "*sqrt<mode>2_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv")
- (sqrt:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>")
+ (sqrt:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
- "xvsqrt<VSX_op> %x0,%x1"
+ "xvsqrt<VSs> %x0,%x1"
[(set_attr "type" "vecfdiv")])
;; Fused vector multiply/add instructions
(define_insn "*fmadd<type>4_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>,<VSr>")
(plus:VSX_F
(mult:VSX_F
- (match_operand:VSX_F 1 "vsx_register_operand" "%fv,fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv,0"))
- (match_operand:VSX_F 3 "vsx_register_operand" "0,fv")))]
+ (match_operand:VSX_F 1 "vsx_register_operand" "%<VSr>,<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>,0"))
+ (match_operand:VSX_F 3 "vsx_register_operand" "0,<VSr>")))]
"TARGET_VSX && TARGET_FUSED_MADD"
"@
- xvmadda<VSX_op> %x0,%x1,%x2
- xvmaddm<VSX_op> %x0,%x1,%x3"
+ xvmadda<VSs> %x0,%x1,%x2
+ xvmaddm<VSs> %x0,%x1,%x3"
[(set_attr "type" "vecfloat")])
(define_insn "*fmsub<type>4_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>,<VSr>")
(minus:VSX_F
(mult:VSX_F
- (match_operand:VSX_F 1 "vsx_register_operand" "%fv,fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv,0"))
- (match_operand:VSX_F 3 "vsx_register_operand" "0,fv")))]
+ (match_operand:VSX_F 1 "vsx_register_operand" "%<VSr>,<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>,0"))
+ (match_operand:VSX_F 3 "vsx_register_operand" "0,<VSr>")))]
"TARGET_VSX && TARGET_FUSED_MADD"
"@
- xvmsuba<VSX_op> %x0,%x1,%x2
- xvmsubm<VSX_op> %x0,%x1,%x3"
+ xvmsuba<VSs> %x0,%x1,%x2
+ xvmsubm<VSs> %x0,%x1,%x3"
[(set_attr "type" "vecfloat")])
(define_insn "*fmadd<type>4_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>,<VSr>")
(neg:VSX_F
(plus:VSX_F
(mult:VSX_F
- (match_operand:VSX_F 1 "vsx_register_operand" "%fv,fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv,0"))
- (match_operand:VSX_F 3 "vsx_register_operand" "0,fv"))))]
+ (match_operand:VSX_F 1 "vsx_register_operand" "%<VSr>,<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>,0"))
+ (match_operand:VSX_F 3 "vsx_register_operand" "0,<VSr>"))))]
"TARGET_VSX && TARGET_FUSED_MADD && HONOR_SIGNED_ZEROS (<MODE>mode)"
"@
- xvnmadda<VSX_op> %x0,%x1,%x2
- xvnmaddm<VSX_op> %x0,%x1,%x3"
+ xvnmadda<VSs> %x0,%x1,%x2
+ xvnmaddm<VSs> %x0,%x1,%x3"
[(set_attr "type" "vecfloat")])
(define_insn "*fmsub<type>4_vsx"
- [(set (match_operand:VSX_F 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>,<VSr>")
(neg:VSX_F
(minus:VSX_F
(mult:VSX_F
- (match_operand:VSX_F 1 "vsx_register_operand" "%fv,fv")
- (match_operand:VSX_F 2 "vsx_register_operand" "fv,0"))
- (match_operand:VSX_F 3 "vsx_register_operand" "0,fv"))))]
+ (match_operand:VSX_F 1 "vsx_register_operand" "%<VSr>,<VSr>")
+ (match_operand:VSX_F 2 "vsx_register_operand" "<VSr>,0"))
+ (match_operand:VSX_F 3 "vsx_register_operand" "0,<VSr>"))))]
"TARGET_VSX && TARGET_FUSED_MADD && HONOR_SIGNED_ZEROS (<MODE>mode)"
"@
- xvnmsuba<VSX_op> %x0,%x1,%x2
- xvnmsubm<VSX_op> %x0,%x1,%x3"
+ xvnmsuba<VSs> %x0,%x1,%x2
+ xvnmsubm<VSs> %x0,%x1,%x3"
[(set_attr "type" "vecfloat")])
;; VSX scalar double precision floating point operations
(define_insn"*adddf3_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (plus:DF (match_operand:DF 1 "register_operand" "fv")
- (match_operand:DF 2 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (plus:DF (match_operand:DF 1 "register_operand" "f")
+ (match_operand:DF 2 "register_operand" "f")))]
"TARGET_VSX"
"xsadddp %x0,%x1,%x2"
[(set_attr "type" "fp")
(set_attr "fp_type" "fp_addsub_d")])
(define_insn"*subdf3_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (minus:DF (match_operand:DF 1 "register_operand" "fv")
- (match_operand:DF 2 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (minus:DF (match_operand:DF 1 "register_operand" "f")
+ (match_operand:DF 2 "register_operand" "f")))]
"TARGET_VSX"
"xssubdp %x0,%x1,%x2"
[(set_attr "type" "fp")
(set_attr "fp_type" "fp_addsub_d")])
(define_insn"*muldf3_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (mult:DF (match_operand:DF 1 "register_operand" "fv")
- (match_operand:DF 2 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (mult:DF (match_operand:DF 1 "register_operand" "f")
+ (match_operand:DF 2 "register_operand" "f")))]
"TARGET_VSX"
"xsmuldp %x0,%x1,%x2"
[(set_attr "type" "dmul")
(set_attr "fp_type" "fp_mul_d")])
(define_insn"*divdf3_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (div:DF (match_operand:DF 1 "register_operand" "fv")
- (match_operand:DF 2 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (div:DF (match_operand:DF 1 "register_operand" "f")
+ (match_operand:DF 2 "register_operand" "f")))]
"TARGET_VSX"
"xsdivdp %x0,%x1,%x2"
[(set_attr "type" "ddiv")])
(define_insn"*negdf2_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (neg:DF (match_operand:DF 1 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (neg:DF (match_operand:DF 1 "register_operand" "f")))]
"TARGET_VSX"
"xsnegdp %x0,%x1"
[(set_attr "type" "fp")])
(define_insn"*absdf2_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (abs:DF (match_operand:DF 1 "register_operand" "fv")))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (abs:DF (match_operand:DF 1 "register_operand" "f")))]
"TARGET_VSX"
"xsabsdp %x0,%x1"
[(set_attr "type" "fp")])
(define_insn"*nabsdf2_vsx"
- [(set (match_operand:DF 0 "register_operand" "=fv")
- (neg:DF (abs:DF (match_operand:DF 1 "register_operand" "fv"))))]
+ [(set (match_operand:DF 0 "register_operand" "=f")
+ (neg:DF (abs:DF (match_operand:DF 1 "register_operand" "f"))))]
"TARGET_VSX"
"xsnabsdp %x0,%x1"
[(set_attr "type" "fp")])
(define_insn "*smaxdf3_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv")
- (smax:DF (match_operand:DF 1 "vsx_register_operand" "fv")
- (match_operand:DF 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f")
+ (smax:DF (match_operand:DF 1 "vsx_register_operand" "f")
+ (match_operand:DF 2 "vsx_register_operand" "f")))]
"TARGET_VSX"
"xsmaxdp %x0,%x1,%x2"
[(set_attr "type" "fp")])
(define_insn "*smindf3_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv")
- (smin:DF (match_operand:DF 1 "vsx_register_operand" "fv")
- (match_operand:DF 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f")
+ (smin:DF (match_operand:DF 1 "vsx_register_operand" "f")
+ (match_operand:DF 2 "vsx_register_operand" "f")))]
"TARGET_VSX"
"xsmindp %x0,%x1,%x2"
[(set_attr "type" "fp")])
;; Fused vector multiply/add instructions
(define_insn "*fmadddf4_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f,f")
(plus:DF
(mult:DF
- (match_operand:DF 1 "vsx_register_operand" "%fv,fv")
- (match_operand:DF 2 "vsx_register_operand" "fv,0"))
- (match_operand:DF 3 "vsx_register_operand" "0,fv")))]
+ (match_operand:DF 1 "vsx_register_operand" "%f,f")
+ (match_operand:DF 2 "vsx_register_operand" "f,0"))
+ (match_operand:DF 3 "vsx_register_operand" "0,f")))]
"TARGET_VSX && TARGET_FUSED_MADD"
"@
xsmaddadp %x0,%x1,%x2
@@ -277,12 +299,12 @@
(set_attr "fp_type" "fp_maddsub_d")])
(define_insn "*fmsubdf4_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f,f")
(minus:DF
(mult:DF
- (match_operand:DF 1 "vsx_register_operand" "%fv,fv")
- (match_operand:DF 2 "vsx_register_operand" "fv,0"))
- (match_operand:DF 3 "vsx_register_operand" "0,fv")))]
+ (match_operand:DF 1 "vsx_register_operand" "%f,f")
+ (match_operand:DF 2 "vsx_register_operand" "f,0"))
+ (match_operand:DF 3 "vsx_register_operand" "0,f")))]
"TARGET_VSX && TARGET_FUSED_MADD"
"@
xsmsubadp %x0,%x1,%x2
@@ -291,13 +313,13 @@
(set_attr "fp_type" "fp_maddsub_d")])
(define_insn "*fnmadddf4_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f,f")
(neg:DF
(plus:DF
(mult:DF
- (match_operand:DF 1 "vsx_register_operand" "%fv,fv")
- (match_operand:DF 2 "vsx_register_operand" "fv,0"))
- (match_operand:DF 3 "vsx_register_operand" "0,fv"))))]
+ (match_operand:DF 1 "vsx_register_operand" "%f,f")
+ (match_operand:DF 2 "vsx_register_operand" "f,0"))
+ (match_operand:DF 3 "vsx_register_operand" "0,f"))))]
"TARGET_VSX && TARGET_FUSED_MADD && HONOR_SIGNED_ZEROS (DFmode)"
"@
xsnmaddadp %x0,%x1,%x2
@@ -306,13 +328,13 @@
(set_attr "fp_type" "fp_maddsub_d")])
(define_insn "*fnmsubdf4_vsx"
- [(set (match_operand:DF 0 "vsx_register_operand" "=fv,fv")
+ [(set (match_operand:DF 0 "vsx_register_operand" "=f,f")
(neg:DF
(minus:DF
(mult:DF
- (match_operand:DF 1 "vsx_register_operand" "%fv,fv")
- (match_operand:DF 2 "vsx_register_operand" "fv,0"))
- (match_operand:DF 3 "vsx_register_operand" "0,fv"))))]
+ (match_operand:DF 1 "vsx_register_operand" "%f,f")
+ (match_operand:DF 2 "vsx_register_operand" "f,0"))
+ (match_operand:DF 3 "vsx_register_operand" "0,f"))))]
"TARGET_VSX && TARGET_FUSED_MADD && HONOR_SIGNED_ZEROS (DFmode)"
"@
xsnmsubadp %x0,%x1,%x2
@@ -323,48 +345,55 @@
;; Logical operations
(define_insn "*and<mode>3_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (and:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "fv")
- (match_operand:VSX_L 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (and:VSX_L
+ (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_L 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
"xxland %x0,%x1,%x2"
[(set_attr "type" "vecsimple")])
(define_insn "*ior<mode>3_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (ior:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "fv")
- (match_operand:VSX_L 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (ior:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_L 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
"xxlor %x0,%x1,%x2"
[(set_attr "type" "vecsimple")])
(define_insn "*xor<mode>3_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (xor:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "fv")
- (match_operand:VSX_L 2 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (xor:VSX_L
+ (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_L 2 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
"xxlxor %x0,%x1,%x2"
[(set_attr "type" "vecsimple")])
(define_insn "*one_cmpl<mode>2_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (not:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (not:VSX_L
+ (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
"xxlnor %x0,%x1,%x1"
[(set_attr "type" "vecsimple")])
(define_insn "*nor<mode>3_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (not:VSX_L (ior:VSX_L (match_operand:VSX_L 1 "vsx_register_operand" "fv")
- (match_operand:VSX_L 2 "vsx_register_operand" "fv"))))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (not:VSX_L
+ (ior:VSX_L
+ (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")
+ (match_operand:VSX_L 2 "vsx_register_operand" "<VSr>"))))]
"TARGET_VSX"
"xxlnor %x0,%x1,%x2"
[(set_attr "type" "vecsimple")])
(define_insn "*andc<mode>3_vsx"
- [(set (match_operand:VSX_L 0 "vsx_register_operand" "=fv")
- (and:VSX_L (not:VSX_L (match_operand:VSX_L 2 "vsx_register_operand" "fv"))
- (match_operand:VSX_L 1 "vsx_register_operand" "fv")))]
+ [(set (match_operand:VSX_L 0 "vsx_register_operand" "=<VSr>")
+ (and:VSX_L
+ (not:VSX_L
+ (match_operand:VSX_L 2 "vsx_register_operand" "<VSr>"))
+ (match_operand:VSX_L 1 "vsx_register_operand" "<VSr>")))]
"TARGET_VSX"
"xxlandc %x0,%x1,%x2"
[(set_attr "type" "vecsimple")])
Index: gcc/config/rs6000/rs6000.h
===================================================================
--- gcc/config/rs6000/rs6000.h (revision 144141)
+++ gcc/config/rs6000/rs6000.h (working copy)
@@ -949,6 +949,13 @@ extern int rs6000_xilinx_fpu;
((MODE) == V4SFmode \
|| (MODE) == V2DFmode) \
+#define VSX_SCALAR_MODE(MODE) \
+ ((MODE) == DFmode)
+
+#define VSX_MODE(MODE) \
+ (VSX_VECTOR_MODE (MODE) \
+ || VSX_SCALAR_MODE (MODE))
+
#define ALTIVEC_VECTOR_MODE(MODE) \
((MODE) == V16QImode \
|| (MODE) == V8HImode \
Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md (revision 144141)
+++ gcc/config/rs6000/altivec.md (working copy)
@@ -176,15 +176,17 @@
(define_mode_iterator VF [V4SF])
;; Vec modes, pity mode iterators are not composable
(define_mode_iterator V [V4SI V8HI V16QI V4SF])
+;; Vec modes for move/logical ops, include vector types for move not otherwise
+;; handled by altivec (v2df, v2di)
+(define_mode_iterator VM [V4SI V8HI V16QI V4SF V2DF V2DI])
(define_mode_attr VI_char [(V4SI "w") (V8HI "h") (V16QI "b")])
;; Altivec move instructions, prefer VSX if we have it
(define_insn "*mov_altivec_<mode>"
- [(set (match_operand:V 0 "nonimmediate_operand" "=Z,v,v,o,r,r,v")
- (match_operand:V 1 "input_operand" "v,Z,v,r,o,r,W"))]
- "TARGET_ALTIVEC
- && (<MODE>mode != V4SFmode && !TARGET_VSX)
+ [(set (match_operand:VM 0 "nonimmediate_operand" "=Z,v,v,o,r,r,v")
+ (match_operand:VM 1 "input_operand" "v,Z,v,r,o,r,W"))]
+ "TARGET_ALTIVEC && !TARGET_VSX
&& (register_operand (operands[0], <MODE>mode)
|| register_operand (operands[1], <MODE>mode))"
{
@@ -203,44 +205,8 @@
[(set_attr "type" "vecstore,vecload,vecsimple,store,load,*,*")])
(define_split
- [(set (match_operand:V4SI 0 "nonimmediate_operand" "")
- (match_operand:V4SI 1 "input_operand" ""))]
- "TARGET_ALTIVEC && reload_completed
- && gpr_or_gpr_p (operands[0], operands[1])"
- [(pc)]
-{
- rs6000_split_multireg_move (operands[0], operands[1]); DONE;
-})
-
-(define_split
- [(set (match_operand:V8HI 0 "nonimmediate_operand" "")
- (match_operand:V8HI 1 "input_operand" ""))]
- "TARGET_ALTIVEC && reload_completed
- && gpr_or_gpr_p (operands[0], operands[1])"
- [(pc)]
-{ rs6000_split_multireg_move (operands[0], operands[1]); DONE; })
-
-(define_split
- [(set (match_operand:V16QI 0 "nonimmediate_operand" "")
- (match_operand:V16QI 1 "input_operand" ""))]
- "TARGET_ALTIVEC && reload_completed
- && gpr_or_gpr_p (operands[0], operands[1])"
- [(pc)]
-{ rs6000_split_multireg_move (operands[0], operands[1]); DONE; })
-
-(define_split
- [(set (match_operand:V4SF 0 "nonimmediate_operand" "")
- (match_operand:V4SF 1 "input_operand" ""))]
- "(TARGET_ALTIVEC || TARGET_VSX) && reload_completed
- && gpr_or_gpr_p (operands[0], operands[1])"
- [(pc)]
-{
- rs6000_split_multireg_move (operands[0], operands[1]); DONE;
-})
-
-(define_split
- [(set (match_operand:V 0 "altivec_register_operand" "")
- (match_operand:V 1 "easy_vector_constant_add_self" ""))]
+ [(set (match_operand:VM 0 "altivec_register_operand" "")
+ (match_operand:VM 1 "easy_vector_constant_add_self" ""))]
"TARGET_ALTIVEC && reload_completed"
[(set (match_dup 0) (match_dup 3))
(set (match_dup 0) (match_dup 4))]
@@ -1074,48 +1040,48 @@
;; logical ops
(define_insn "*and<mode>3_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (and:V (match_operand:V 1 "register_operand" "v")
- (match_operand:V 2 "register_operand" "v")))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (and:VM (match_operand:VM 1 "register_operand" "v")
+ (match_operand:VM 2 "register_operand" "v")))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vand %0,%1,%2"
[(set_attr "type" "vecsimple")])
(define_insn "*ior<mode>3_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (ior:V (match_operand:V 1 "register_operand" "v")
- (match_operand:V 2 "register_operand" "v")))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (ior:VM (match_operand:VM 1 "register_operand" "v")
+ (match_operand:VM 2 "register_operand" "v")))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vor %0,%1,%2"
[(set_attr "type" "vecsimple")])
(define_insn "*xor<mode>3_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (xor:V (match_operand:V 1 "register_operand" "v")
- (match_operand:V 2 "register_operand" "v")))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (xor:VM (match_operand:VM 1 "register_operand" "v")
+ (match_operand:VM 2 "register_operand" "v")))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vxor %0,%1,%2"
[(set_attr "type" "vecsimple")])
(define_insn "*one_cmpl<mode>2_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (not:V (match_operand:V 1 "register_operand" "v")))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (not:VM (match_operand:VM 1 "register_operand" "v")))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vnor %0,%1,%1"
[(set_attr "type" "vecsimple")])
(define_insn "*nor<mode>3_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (not:V (ior:V (match_operand:V 1 "register_operand" "v")
- (match_operand:V 2 "register_operand" "v"))))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (not:VM (ior:VM (match_operand:VM 1 "register_operand" "v")
+ (match_operand:VM 2 "register_operand" "v"))))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vnor %0,%1,%2"
[(set_attr "type" "vecsimple")])
(define_insn "*andc<mode>3_altivec"
- [(set (match_operand:V 0 "register_operand" "=v")
- (and:V (not:V (match_operand:V 2 "register_operand" "v"))
- (match_operand:V 1 "register_operand" "v")))]
+ [(set (match_operand:VM 0 "register_operand" "=v")
+ (and:VM (not:VM (match_operand:VM 2 "register_operand" "v"))
+ (match_operand:VM 1 "register_operand" "v")))]
"TARGET_ALTIVEC && !TARGET_VSX"
"vandc %0,%1,%2"
[(set_attr "type" "vecsimple")])
Index: gcc/config/rs6000/rs6000.md
===================================================================
--- gcc/config/rs6000/rs6000.md (revision 144141)
+++ gcc/config/rs6000/rs6000.md (working copy)
@@ -8705,7 +8705,7 @@
[(set (match_operand:DF 0 "nonimmediate_operand" "=Y,r,!r,f,f,m,*c*l,!r,*h,!r,!r,!r")
(match_operand:DF 1 "input_operand" "r,Y,r,f,m,f,r,h,0,G,H,F"))]
"TARGET_POWERPC64 && !TARGET_MFPGPR && TARGET_HARD_FLOAT && TARGET_FPRS
- && TARGET_DOUBLE_FLOAT && !TARGET_VSX
+ && TARGET_DOUBLE_FLOAT
&& (gpc_reg_operand (operands[0], DFmode)
|| gpc_reg_operand (operands[1], DFmode))"
"@
@@ -8724,33 +8724,6 @@
[(set_attr "type" "store,load,*,fp,fpload,fpstore,mtjmpr,mfjmpr,*,*,*,*")
(set_attr "length" "4,4,4,4,4,4,4,4,4,8,12,16")])
-; Like movdf_harfloat64 but add VSX support
-; List Y->r and r->Y before r->r for reload.
-(define_insn "*movdf_hardfloat64_vsx"
- [(set (match_operand:DF 0 "nonimmediate_operand" "=Y,r,!r,fv,fz,Z,f,m,*c*l,!r,*h,!r,!r,!r")
- (match_operand:DF 1 "input_operand" "r,Y,r,fv,Z,fv,m,f,r,h,0,G,H,F"))]
- "TARGET_POWERPC64 && TARGET_HARD_FLOAT && TARGET_FPRS
- && TARGET_DOUBLE_FLOAT && TARGET_VSX
- && (gpc_reg_operand (operands[0], DFmode)
- || gpc_reg_operand (operands[1], DFmode))"
- "@
- std%U0%X0 %1,%0
- ld%U1%X1 %0,%1
- mr %0,%1
- xsmovdp %x0,%x1
- lxsd%U1%X1 %x0,%1
- stxsd%U0%X0 %x1,%0
- lfd%U1%X1 %0,%1
- stfd%U0%X0 %1,%0
- mt%0 %1
- mf%1 %0
- {cror 0,0,0|nop}
- #
- #
- #"
- [(set_attr "type" "store,load,*,fp,fpload,fpstore,fpload,fpstore,mtjmpr,mfjmpr,*,*,*,*")
- (set_attr "length" "4,4,4,4,4,4,4,4,4,4,4,8,12,16")])
-
(define_insn "*movdf_softfloat64"
[(set (match_operand:DF 0 "nonimmediate_operand" "=r,Y,r,cl,r,r,r,r,*h")
(match_operand:DF 1 "input_operand" "Y,r,r,r,h,G,H,F,0"))]
--
Michael Meissner, IBM
4 Technology Place Drive, MS 2203A, Westford, MA, 01886, USA
meissner@linux.vnet.ibm.com