* [PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian
@ 2013-11-04 5:28 Bill Schmidt
2013-11-05 1:15 ` Bill Schmidt
0 siblings, 1 reply; 3+ messages in thread
From: Bill Schmidt @ 2013-11-04 5:28 UTC (permalink / raw)
To: gcc-patches; +Cc: dje.gcc
Hi,
This patch reverses the meanings of the multiply even/odd instructions for
little endian. Since these instructions use a big-endian idea of
evenness/oddness, the nominal meanings of the instructions are wrong for
little endian.
Bootstrapped and tested with the rest of the patch set on
powerpc64{,le}-unknown-linux-gnu with no regressions. Ok for trunk?
Thanks,
Bill
2013-11-03 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
* config/rs6000/altivec.md (vec_widen_umult_even_v16qi): Swap
meanings of even and odd multiplies for little endian.
(vec_widen_smult_even_v16qi): Likewise.
(vec_widen_umult_even_v8hi): Likewise.
(vec_widen_smult_even_v8hi): Likewise.
(vec_widen_umult_odd_v16qi): Likewise.
(vec_widen_smult_odd_v16qi): Likewise.
(vec_widen_umult_odd_v8hi): Likewise.
(vec_widen_smult_odd_v8hi): Likewise.
Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md (revision 204192)
+++ gcc/config/rs6000/altivec.md (working copy)
@@ -978,7 +988,12 @@
(match_operand:V16QI 2 "register_operand" "v")]
UNSPEC_VMULEUB))]
"TARGET_ALTIVEC"
- "vmuleub %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmuleub %0,%1,%2";
+ else
+ return "vmuloub %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_smult_even_v16qi"
@@ -987,7 +1002,12 @@
(match_operand:V16QI 2 "register_operand" "v")]
UNSPEC_VMULESB))]
"TARGET_ALTIVEC"
- "vmulesb %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmulesb %0,%1,%2";
+ else
+ return "vmulosb %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_umult_even_v8hi"
@@ -996,7 +1016,12 @@
(match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULEUH))]
"TARGET_ALTIVEC"
- "vmuleuh %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmuleuh %0,%1,%2";
+ else
+ return "vmulouh %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_smult_even_v8hi"
@@ -1005,7 +1030,12 @@
(match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULESH))]
"TARGET_ALTIVEC"
- "vmulesh %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmulesh %0,%1,%2";
+ else
+ return "vmulosh %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_umult_odd_v16qi"
@@ -1014,7 +1044,12 @@
(match_operand:V16QI 2 "register_operand" "v")]
UNSPEC_VMULOUB))]
"TARGET_ALTIVEC"
- "vmuloub %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmuloub %0,%1,%2";
+ else
+ return "vmuleub %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_smult_odd_v16qi"
@@ -1023,7 +1058,12 @@
(match_operand:V16QI 2 "register_operand" "v")]
UNSPEC_VMULOSB))]
"TARGET_ALTIVEC"
- "vmulosb %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmulosb %0,%1,%2";
+ else
+ return "vmulesb %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_umult_odd_v8hi"
@@ -1032,7 +1072,12 @@
(match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULOUH))]
"TARGET_ALTIVEC"
- "vmulouh %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmulouh %0,%1,%2";
+ else
+ return "vmuleuh %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
(define_insn "vec_widen_smult_odd_v8hi"
@@ -1041,7 +1086,12 @@
(match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULOSH))]
"TARGET_ALTIVEC"
- "vmulosh %0,%1,%2"
+{
+ if (BYTES_BIG_ENDIAN)
+ return "vmulosh %0,%1,%2";
+ else
+ return "vmulesh %0,%1,%2";
+}
[(set_attr "type" "veccomplex")])
* Re: [PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian
2013-11-04 5:28 [PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian Bill Schmidt
@ 2013-11-05 1:15 ` Bill Schmidt
2013-11-06 2:45 ` David Edelsohn
0 siblings, 1 reply; 3+ messages in thread
From: Bill Schmidt @ 2013-11-05 1:15 UTC (permalink / raw)
To: gcc-patches; +Cc: dje.gcc
Hi,
Here's a new version of this patch, revised according to Richard
Sandiford's suggestions. Unfortunately the diffing is a little bit ugly
for this version.
Bootstrapped and tested on powerpc64{,le}-unknown-linux-gnu with no
regressions. Is this ok for trunk?
Thanks,
Bill
2013-11-04 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
* config/rs6000/altivec.md (vec_widen_umult_even_v16qi): Change
define_insn to define_expand that uses even patterns for big
endian and odd patterns for little endian.
(vec_widen_smult_even_v16qi): Likewise.
(vec_widen_umult_even_v8hi): Likewise.
(vec_widen_smult_even_v8hi): Likewise.
(vec_widen_umult_odd_v16qi): Likewise.
(vec_widen_smult_odd_v16qi): Likewise.
(vec_widen_umult_odd_v8hi): Likewise.
(vec_widen_smult_odd_v8hi): Likewise.
(altivec_vmuleub): New define_insn.
(altivec_vmuloub): Likewise.
(altivec_vmulesb): Likewise.
(altivec_vmulosb): Likewise.
(altivec_vmuleuh): Likewise.
(altivec_vmulouh): Likewise.
(altivec_vmulesh): Likewise.
(altivec_vmulosh): Likewise.
Index: gcc/config/rs6000/altivec.md
===================================================================
--- gcc/config/rs6000/altivec.md (revision 204350)
+++ gcc/config/rs6000/altivec.md (working copy)
@@ -972,7 +977,111 @@
"vmrgow %0,%1,%2"
[(set_attr "type" "vecperm")])
-(define_insn "vec_widen_umult_even_v16qi"
+(define_expand "vec_widen_umult_even_v16qi"
+ [(use (match_operand:V8HI 0 "register_operand" ""))
+ (use (match_operand:V16QI 1 "register_operand" ""))
+ (use (match_operand:V16QI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmuleub (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmuloub (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_smult_even_v16qi"
+ [(use (match_operand:V8HI 0 "register_operand" ""))
+ (use (match_operand:V16QI 1 "register_operand" ""))
+ (use (match_operand:V16QI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmulesb (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmulosb (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_umult_even_v8hi"
+ [(use (match_operand:V4SI 0 "register_operand" ""))
+ (use (match_operand:V8HI 1 "register_operand" ""))
+ (use (match_operand:V8HI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmuleuh (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmulouh (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_smult_even_v8hi"
+ [(use (match_operand:V4SI 0 "register_operand" ""))
+ (use (match_operand:V8HI 1 "register_operand" ""))
+ (use (match_operand:V8HI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmulesh (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmulosh (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_umult_odd_v16qi"
+ [(use (match_operand:V8HI 0 "register_operand" ""))
+ (use (match_operand:V16QI 1 "register_operand" ""))
+ (use (match_operand:V16QI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmuloub (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmuleub (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_smult_odd_v16qi"
+ [(use (match_operand:V8HI 0 "register_operand" ""))
+ (use (match_operand:V16QI 1 "register_operand" ""))
+ (use (match_operand:V16QI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmulosb (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmulesb (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_umult_odd_v8hi"
+ [(use (match_operand:V4SI 0 "register_operand" ""))
+ (use (match_operand:V8HI 1 "register_operand" ""))
+ (use (match_operand:V8HI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmulouh (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmuleuh (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_expand "vec_widen_smult_odd_v8hi"
+ [(use (match_operand:V4SI 0 "register_operand" ""))
+ (use (match_operand:V8HI 1 "register_operand" ""))
+ (use (match_operand:V8HI 2 "register_operand" ""))]
+ "TARGET_ALTIVEC"
+{
+ if (BYTES_BIG_ENDIAN)
+ emit_insn (gen_altivec_vmulosh (operands[0], operands[1], operands[2]));
+ else
+ emit_insn (gen_altivec_vmulesh (operands[0], operands[1], operands[2]));
+ DONE;
+})
+
+(define_insn "altivec_vmuleub"
[(set (match_operand:V8HI 0 "register_operand" "=v")
(unspec:V8HI [(match_operand:V16QI 1 "register_operand" "v")
(match_operand:V16QI 2 "register_operand" "v")]
@@ -981,43 +1090,25 @@
"vmuleub %0,%1,%2"
[(set_attr "type" "veccomplex")])
-(define_insn "vec_widen_smult_even_v16qi"
+(define_insn "altivec_vmuloub"
[(set (match_operand:V8HI 0 "register_operand" "=v")
(unspec:V8HI [(match_operand:V16QI 1 "register_operand" "v")
(match_operand:V16QI 2 "register_operand" "v")]
- UNSPEC_VMULESB))]
+ UNSPEC_VMULOUB))]
"TARGET_ALTIVEC"
- "vmulesb %0,%1,%2"
+ "vmuloub %0,%1,%2"
[(set_attr "type" "veccomplex")])
-(define_insn "vec_widen_umult_even_v8hi"
- [(set (match_operand:V4SI 0 "register_operand" "=v")
- (unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
- (match_operand:V8HI 2 "register_operand" "v")]
- UNSPEC_VMULEUH))]
- "TARGET_ALTIVEC"
- "vmuleuh %0,%1,%2"
- [(set_attr "type" "veccomplex")])
-
-(define_insn "vec_widen_smult_even_v8hi"
- [(set (match_operand:V4SI 0 "register_operand" "=v")
- (unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
- (match_operand:V8HI 2 "register_operand" "v")]
- UNSPEC_VMULESH))]
- "TARGET_ALTIVEC"
- "vmulesh %0,%1,%2"
- [(set_attr "type" "veccomplex")])
-
-(define_insn "vec_widen_umult_odd_v16qi"
+(define_insn "altivec_vmulesb"
[(set (match_operand:V8HI 0 "register_operand" "=v")
(unspec:V8HI [(match_operand:V16QI 1 "register_operand" "v")
(match_operand:V16QI 2 "register_operand" "v")]
- UNSPEC_VMULOUB))]
+ UNSPEC_VMULESB))]
"TARGET_ALTIVEC"
- "vmuloub %0,%1,%2"
+ "vmulesb %0,%1,%2"
[(set_attr "type" "veccomplex")])
-(define_insn "vec_widen_smult_odd_v16qi"
+(define_insn "altivec_vmulosb"
[(set (match_operand:V8HI 0 "register_operand" "=v")
(unspec:V8HI [(match_operand:V16QI 1 "register_operand" "v")
(match_operand:V16QI 2 "register_operand" "v")]
@@ -1026,19 +1117,37 @@
"vmulosb %0,%1,%2"
[(set_attr "type" "veccomplex")])
-(define_insn "vec_widen_umult_odd_v8hi"
+(define_insn "altivec_vmuleuh"
[(set (match_operand:V4SI 0 "register_operand" "=v")
(unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
(match_operand:V8HI 2 "register_operand" "v")]
+ UNSPEC_VMULEUH))]
+ "TARGET_ALTIVEC"
+ "vmuleuh %0,%1,%2"
+ [(set_attr "type" "veccomplex")])
+
+(define_insn "altivec_vmulouh"
+ [(set (match_operand:V4SI 0 "register_operand" "=v")
+ (unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
+ (match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULOUH))]
"TARGET_ALTIVEC"
"vmulouh %0,%1,%2"
[(set_attr "type" "veccomplex")])
-(define_insn "vec_widen_smult_odd_v8hi"
+(define_insn "altivec_vmulesh"
[(set (match_operand:V4SI 0 "register_operand" "=v")
(unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
(match_operand:V8HI 2 "register_operand" "v")]
+ UNSPEC_VMULESH))]
+ "TARGET_ALTIVEC"
+ "vmulesh %0,%1,%2"
+ [(set_attr "type" "veccomplex")])
+
+(define_insn "altivec_vmulosh"
+ [(set (match_operand:V4SI 0 "register_operand" "=v")
+ (unspec:V4SI [(match_operand:V8HI 1 "register_operand" "v")
+ (match_operand:V8HI 2 "register_operand" "v")]
UNSPEC_VMULOSH))]
"TARGET_ALTIVEC"
"vmulosh %0,%1,%2"
* Re: [PATCH, rs6000] (1/3) Reverse meanings of multiply even/odd for little endian
2013-11-05 1:15 ` Bill Schmidt
@ 2013-11-06 2:45 ` David Edelsohn
0 siblings, 0 replies; 3+ messages in thread
From: David Edelsohn @ 2013-11-06 2:45 UTC (permalink / raw)
To: Bill Schmidt; +Cc: GCC Patches
On Mon, Nov 4, 2013 at 7:28 PM, Bill Schmidt
<wschmidt@linux.vnet.ibm.com> wrote:
> Hi,
>
> Here's a new version of this patch, revised according to Richard
> Sandiford's suggestions. Unfortunately the diffing is a little bit ugly
> for this version.
>
> Bootstrapped and tested on powerpc64{,le}-unknown-linux-gnu with no
> regressions. Is this ok for trunk?
>
> Thanks,
> Bill
>
>
> 2013-11-04 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
>
> * config/rs6000/altivec.md (vec_widen_umult_even_v16qi): Change
> define_insn to define_expand that uses even patterns for big
> endian and odd patterns for little endian.
> (vec_widen_smult_even_v16qi): Likewise.
> (vec_widen_umult_even_v8hi): Likewise.
> (vec_widen_smult_even_v8hi): Likewise.
> (vec_widen_umult_odd_v16qi): Likewise.
> (vec_widen_smult_odd_v16qi): Likewise.
> (vec_widen_umult_odd_v8hi): Likewise.
> (vec_widen_smult_odd_v8hi): Likewise.
> (altivec_vmuleub): New define_insn.
> (altivec_vmuloub): Likewise.
> (altivec_vmulesb): Likewise.
> (altivec_vmulosb): Likewise.
> (altivec_vmuleuh): Likewise.
> (altivec_vmulouh): Likewise.
> (altivec_vmulesh): Likewise.
> (altivec_vmulosh): Likewise.
Okay.
Unfortunately there is no way to avoid an ugly solution.
Thanks, David