[PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian
- From: Bill Schmidt <wschmidt at linux dot vnet dot ibm dot com>
- To: gcc-patches at gcc dot gnu dot org
- Cc: dje dot gcc at gmail dot com
- Date: Sun, 03 Nov 2013 23:28:07 -0600
- Subject: [PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian
Hi,
This patch reverses the meanings of the multiply even/odd instructions for
little endian. Because these instructions use a big-endian notion of
evenness/oddness, their nominal meanings are wrong for little endian.
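To illustrate why (this example is mine, not part of the patch): vmuleub
forms its "even" pairs using big-endian element numbering, where element 0
is the most significant byte; in little-endian mode element 0 is the least
significant byte, so the pairs the hardware calls even are the ones a
little-endian program calls odd. A minimal sketch in C, assuming the usual
<altivec.h> intrinsics and that vec_mule still expands through these
patterns:

#include <altivec.h>

/* Sketch only.  With the reversed patterns, element i of the result
   is a[2*i] * b[2*i] in the element numbering the program actually
   sees, on big and little endian alike.  Before the patch, a
   little-endian build would get the odd-element products here.  */
vector unsigned short
mul_even_bytes (vector unsigned char a, vector unsigned char b)
{
  return vec_mule (a, b);   /* vmuleub on BE, vmuloub on LE.  */
}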
Bootstrapped and tested with the rest of the patch set on
powerpc64{,le}-unknown-linux-gnu with no regressions. Ok for trunk?
Thanks,
Bill
2013-11-03  Bill Schmidt  <wschmidt@linux.vnet.ibm.com>

	* config/rs6000/altivec.md (vec_widen_umult_even_v16qi): Swap
	meanings of even and odd multiplies for little endian.
	(vec_widen_smult_even_v16qi): Likewise.
	(vec_widen_umult_even_v8hi): Likewise.
	(vec_widen_smult_even_v8hi): Likewise.
	(vec_widen_umult_odd_v16qi): Likewise.
	(vec_widen_smult_odd_v16qi): Likewise.
	(vec_widen_umult_odd_v8hi): Likewise.
	(vec_widen_smult_odd_v8hi): Likewise.
Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md (revision 204192)
+++ gcc/config/rs6000/altivec.md (working copy)
@@ -978,7 +988,12 @@
                       (match_operand:V16QI 2 "register_operand" "v")]
                      UNSPEC_VMULEUB))]
   "TARGET_ALTIVEC"
-  "vmuleub %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmuleub %0,%1,%2";
+  else
+    return "vmuloub %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_smult_even_v16qi"
@@ -987,7 +1002,12 @@
                       (match_operand:V16QI 2 "register_operand" "v")]
                      UNSPEC_VMULESB))]
   "TARGET_ALTIVEC"
-  "vmulesb %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmulesb %0,%1,%2";
+  else
+    return "vmulosb %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_umult_even_v8hi"
@@ -996,7 +1016,12 @@
                       (match_operand:V8HI 2 "register_operand" "v")]
                      UNSPEC_VMULEUH))]
   "TARGET_ALTIVEC"
-  "vmuleuh %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmuleuh %0,%1,%2";
+  else
+    return "vmulouh %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_smult_even_v8hi"
@@ -1005,7 +1030,12 @@
                       (match_operand:V8HI 2 "register_operand" "v")]
                      UNSPEC_VMULESH))]
   "TARGET_ALTIVEC"
-  "vmulesh %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmulesh %0,%1,%2";
+  else
+    return "vmulosh %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_umult_odd_v16qi"
@@ -1014,7 +1044,12 @@
                       (match_operand:V16QI 2 "register_operand" "v")]
                      UNSPEC_VMULOUB))]
   "TARGET_ALTIVEC"
-  "vmuloub %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmuloub %0,%1,%2";
+  else
+    return "vmuleub %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_smult_odd_v16qi"
@@ -1023,7 +1058,12 @@
                       (match_operand:V16QI 2 "register_operand" "v")]
                      UNSPEC_VMULOSB))]
   "TARGET_ALTIVEC"
-  "vmulosb %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmulosb %0,%1,%2";
+  else
+    return "vmulesb %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_umult_odd_v8hi"
@@ -1032,7 +1072,12 @@
                       (match_operand:V8HI 2 "register_operand" "v")]
                      UNSPEC_VMULOUH))]
   "TARGET_ALTIVEC"
-  "vmulouh %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmulouh %0,%1,%2";
+  else
+    return "vmuleuh %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
 
 (define_insn "vec_widen_smult_odd_v8hi"
@@ -1041,7 +1086,12 @@
                       (match_operand:V8HI 2 "register_operand" "v")]
                      UNSPEC_VMULOSH))]
   "TARGET_ALTIVEC"
-  "vmulosh %0,%1,%2"
+{
+  if (BYTES_BIG_ENDIAN)
+    return "vmulosh %0,%1,%2";
+  else
+    return "vmulesh %0,%1,%2";
+}
   [(set_attr "type" "veccomplex")])
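For anyone who wants to sanity-check the new behavior on hardware, here is
a hypothetical standalone test of my own (not part of the patch or the
testsuite), built with -maltivec:

#include <altivec.h>
#include <stdio.h>

int
main (void)
{
  vector unsigned char a = { 0, 1, 2, 3, 4, 5, 6, 7,
                             8, 9, 10, 11, 12, 13, 14, 15 };
  vector unsigned char b = vec_splat_u8 (2);
  union { vector unsigned short v; unsigned short s[8]; } u;

  u.v = vec_mule (a, b);

  /* With the patch this should print the doubled even elements,
     0 4 8 12 16 20 24 28, on big and little endian alike; without
     it, a little-endian build prints the doubled odd elements.  */
  for (int i = 0; i < 8; i++)
    printf ("%u ", (unsigned) u.s[i]);
  printf ("\n");
  return 0;
}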