* [RFA] PowerPC e5500 and e6500 cores support
From: Edmar @ 2012-03-06 17:45 UTC (permalink / raw)
  To: gcc-patches

[-- Attachment #1: Type: text/plain, Size: 8790 bytes --]

Freescale would like to contribute these patches to GCC.

The patches enable GCC for the new Freescale 64-bit cores: they add
pipeline descriptions and set the proper default flags for the e5500 and
e6500 cores.

Both are 64-bit cores capable of executing the popcntb/w/d, bperm, cmpb,
and prtyw/d instructions.

The e6500 core has Altivec, including the new Altivec instructions that
will be part of Power ISA 2.07.
Several test cases for the new Altivec builtins are included.
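
As a quick illustration of the new intrinsics (a minimal sketch, not part
of the patch; it assumes a compiler built with these patches and
-mcpu=e6500 or -maltivec2, so that __ALTIVEC2__ is defined):

  #include <altivec.h>

  /* vec_absd maps to the new vabsdub instruction and computes the
     element-wise absolute difference of the two byte vectors.  */
  vector unsigned char
  absdiff (vector unsigned char a, vector unsigned char b)
  {
    return vec_absd (a, b);
  }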

The patch was generated from subversion revision 184757.

The patch was regression-tested for the power7 target under these conditions:
--enable-checking --disable-decimal-float --enable-languages=c,c++,fortran

During development, an ICE for the cell target was found.
The e6500 patch also fixes that problem.

Since the cell ICE is a regression, I have a separate patch and
ChangeLog that can be applied to the gcc-4.7/4.6/4.5 branches to fix this
ICE only.  (The branches were also regression-tested under the same
conditions as above.)

Regarding the implementation of popcntb, popcntd, and cmpb: GCC has
dedicated masks in target_flags for them, but due to a shortage of bits,
those masks control more than their names imply.

TARGET_POPCNTB also controls the FP reciprocal estimate instructions,
which the Freescale cores do not have.
TARGET_POPCNTD also controls the FP double-word conversions and lfiwzx,
which the Freescale cores do not have.
TARGET_CMPB also controls copy sign and lfiwax, which the Freescale
cores do not have.

In the patch I minimized the number of changes while not adding any new
mask to target_flags.
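
Roughly, the idea in rs6000.h is the following (an illustrative sketch
only, not the literal patch hunks; TARGET_E5500/TARGET_E6500 stand in for
whatever CPU test the real header uses):

  /* The existing masks keep their meaning, but the derived macros
     exclude the Freescale cores, and the FP conversion macros are
     re-mapped so they stay consistent.  */
  #define TARGET_LFIWAX  (TARGET_CMPB    && !TARGET_E5500 && !TARGET_E6500)
  #define TARGET_LFIWZX  (TARGET_POPCNTD && !TARGET_E5500 && !TARGET_E6500)
  #define TARGET_FCFIDS  TARGET_LFIWZX   /* FP conversions follow lfiwzx.  */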

A new attribute type "popcnt" is created.  It is used in our scheduler,
since the popcount instructions take 2 cycles on the Freescale cores.
The schedulers of the existing architectures are not affected, because
the default value of the popcnt type is the same as not having a type
definition on a define_insn.
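
For example (not part of the patch, just to show what the new type ends
up scheduling; assuming -mcpu=e5500 -O2 on a 64-bit target):

  /* This should compile to a single popcntd instruction, which the
     e5500/e6500 pipeline models now reserve on the SFX unit for
     2 cycles via the "popcnt" insn type.  */
  int
  count_bits (unsigned long x)
  {
    return __builtin_popcountl (x);
  }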

We thank you in advance for your time to review and commit these patches.

Regards,
Edmar

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* config.gcc (cpu_is_64bit): Add new cores e5500, e6500.
	(powerpc*-*-*): Add new cores e5500, e6500.
	* config/rs6000/rs6000-cpus.def: Add cpu definitions for e5500 and
	e6500.
	* config/rs6000/e5500.md: New file.
	* config/rs6000/e6500.md: New file.
	* config/rs6000/rs6000.opt: Add new option for altivec2.
	* config/rs6000/rs6000-opt.h (processor_type): Add
	PROCESSOR_PPCE5500 and PROCESSOR_PPCE6500.
	* config/rs6000/rs6000.h (ASM_CPU_SPEC): Add e5500 and e6500.
	(TARGET_LFIWAX): Exclude e5500 and e6500.
	(TARGET_LFIWZX): Ditto.
	(TARGET_FCFIDS): Re-maps to TARGET_LFIWZX.
	(TARGET_FCFIDU): Ditto.
	(TARGET_FCFIDUS): Ditto.
	(TARGET_FCTIDUZ): Ditto.
	(TARGET_FCTIWUZ): Ditto.
	(TARGET_FRE): Exclude e5500 and e6500.
	(TARGET_FRSQRTES): Ditto.
	(RS6000_BTM_ALTIVEC2): New.
	(RS6000_BTM_COMMON): Add RS6000_BTM_ALTIVEC2.
	* config/rs6000/rs6000.md (define_attr "type"): New type popcnt.
	(define_attr "cpu"): Add ppce5500 and ppce6500.
	Include e5500.md and e6500.md.
	(popcntb<mode>2): Add attribute type popcnt.
	(popcntd<mode>2): Ditto.
	(copysign<mode>3): Re-maps to TARGET_LFIWAX.
	(copysign<mode>3_fcpsgn): Ditto.
	* config/rs6000/rs6000.c (processor_costs): Add new costs for
	e5500 and e6500.
	(POWERPC_MASKS): Add new mask for altivec2.
	(rs6000_builtin_mask_calculate): Add new builtin mask for
	altivec2.
	(rs6000_option_override_internal): Disallow the Altivec and SPE
	options with e5500.  Disallow the SPE options with e6500.
	Increase the move inline limit for e5500 and e6500.  Disable
	fsqrt instructions for e5500 and e6500.  Disable the mfocr
	instruction for e5500.  Disable string instructions for e5500
	and e6500.  Enable branch target alignment for e5500 and e6500.
	Initialize rs6000_cost for e5500 and e6500.
	(altivec_expand_builtin): Add store vector expansion cases for
	stvexbx, stvexhx, stvexwx, stvflx, stvflxl, stvfrx, stvfrxl,
	stvswx, stvswxl. Add load vector expansion cases for lvexbx,
	lvexhx, lvexwx, lvtlx, lvtlxl, lvtrx, lvtrxl, lvswx, lvswxl, lvsm.
	(altivec_init_builtins): Add builtin and override builtin
	definitions for lvexbx, lvexhx, lvexwx, stvexbx, stvexhx, stvexwx,
	lvtlx, lvtlxl, lvtrx, lvtrxl, stvflx, stvflxl, stvfrx, stvfrxl,
	lvswx, lvswxl, lvsm, stvswx, stvswxl.
	(builtin_function_type): Set unsigned type flags for vabsdub,
	vabsduh, vabsduw.
	(rs6000_adjust_cost): Add extra scheduling cycles between compare
	and branch for e5500 and e6500.
	(rs6000_issue_rate): Set issue rate for e5500 and e6500.
	(rs6000_builtin_mask_names): Add entry for altivec2 mask.
	* config/rs6000/altivec.md (unspec): New unspecs: UNSPEC_LVEX,
	UNSPEC_STVEX, UNSPEC_LVTLX, UNSPEC_LVTLXL, UNSPEC_LVTRX,
	UNSPEC_LVTRXL, UNSPEC_STVFLX, UNSPEC_STVFLXL, UNSPEC_STVFRX,
	UNSPEC_STVFRXL, UNSPEC_LVSWX, UNSPEC_LVSWXL, UNSPEC_STVSWX,
	UNSPEC_STVSWXL, UNSPEC_LVSM, UNSPEC_VABSDUB, UNSPEC_VABSDUH,
	UNSPEC_VABSDUW.
	(altivec_vabsduw): New altivec2 insn. Use new unspec.
	(altivec_vabsduh): Ditto.
	(altivec_vabsdub): Ditto.
	(altivec_lvex<VI_char>x): Ditto.
	(altivec_stvex<VI_char>x): Ditto.
	(altivec_lvtlx): Ditto.
	(altivec_lvtlxl): Ditto.
	(altivec_lvtrx): Ditto.
	(altivec_lvtrxl): Ditto.
	(altivec_stvflx): Ditto.
	(altivec_stvflxl): Ditto.
	(altivec_stvfrx): Ditto.
	(altivec_stvfrxl): Ditto.
	(altivec_lvswx): Ditto.
	(altivec_lvswxl): Ditto.
	(altivec_lvsm): Ditto.
	(altivec_stvswx): Ditto.
	(altivec_stvswxl): Ditto.
	(altivec_stvlx): Change machine mode of operands.
	(altivec_stvlxl): Ditto.
	(altivec_stvrx): Ditto.
	(altivec_stvrxl): Ditto.
	* config/rs6000/altivec.h: New altivec2 synonyms.
	* config/rs6000/rs6000-builtin.def: New altivec2 convenience
	macros. Use macros to enter new altivec2 builtin definitions and
	overload builtin definitions for VABSDUB, VABSDUH, VABSDUW,
	LVEXBX, LVEXHX, LVEXWX, LVTLX, LVTLXL, LVTRX, LVTRXL, LVSWX,
	LVSWXL, LVSM, STVEXBX, STVEXHX, STVEXWX, STVFLX, STVFLXL, STVFRX,
	STVFRXL, STVSWX, STVSWXL.
	* config/rs6000/rs6000-c.c (rs6000_target_modify_macros): New
	macro definition for altivec2.
	(altivec_overloaded_builtins): New entries for altivec2 overloads.
	* doc/extend.texi: Document altivec2 builtins.
	* doc/invoke.texi: Add altivec2 PowerPC option.
	(mpopcntb): Document e5500, e6500 implementations.
	(mpopcntd): Document the floating-point conversion instructions
	and the e5500, e6500 implementations.
	(mcmpb): Document the copy sign instruction and the e5500, e6500
	implementations.
	(item -mcpu): Add e5500 and e6500 to list of cpus. Document
	altivec2 PowerPC option.
	(item -maltivec2): New.


2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* gcc.dg/tree-ssa/vector-3.c: Adjust regular expression.
	* gcc.target/powerpc/altivec2_builtin_1.c: New test case.
	* gcc.target/powerpc/altivec2_builtin_2.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_3.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_4.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_5.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_6.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_7.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_8.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_9.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_10.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_11.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_12.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_13.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_14.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_15.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_16.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_17.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_18.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_19.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_20.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_21.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin_22.c: Ditto.
	* gcc.target/powerpc/cell_builtin_1.c: Ditto.
	* gcc.target/powerpc/cell_builtin_2.c: Ditto.
	* gcc.target/powerpc/cell_builtin_3.c: Ditto.
	* gcc.target/powerpc/cell_builtin_4.c: Ditto.
	* gcc.target/powerpc/cell_builtin_5.c: Ditto.
	* gcc.target/powerpc/cell_builtin_6.c: Ditto.
	* gcc.target/powerpc/cell_builtin_7.c: Ditto.
	* gcc.target/powerpc/cell_builtin_8.c: Ditto.


ChangeLogs for Cell patch and test cases to be applied on branches 4.7/4.6/4.5

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* config/rs6000/altivec.md (altivec_stvlx): Change machine mode of
	operands.
	(altivec_stvlxl): Ditto.
	(altivec_stvrx): Ditto.
	(altivec_stvrxl): Ditto.

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* gcc.target/powerpc/cell_builtin_1.c: New test case.
	* gcc.target/powerpc/cell_builtin_2.c: Ditto.
	* gcc.target/powerpc/cell_builtin_3.c: Ditto.
	* gcc.target/powerpc/cell_builtin_4.c: Ditto.
	* gcc.target/powerpc/cell_builtin_5.c: Ditto.
	* gcc.target/powerpc/cell_builtin_6.c: Ditto.
	* gcc.target/powerpc/cell_builtin_7.c: Ditto.
	* gcc.target/powerpc/cell_builtin_8.c: Ditto.



[-- Attachment #2: sub_e6500-gcc.diff --]
[-- Type: text/x-patch, Size: 205207 bytes --]

diff -ruN gcc-20120223-orig/gcc/config/rs6000/altivec.h gcc-20120223/gcc/config/rs6000/altivec.h
--- gcc-20120223-orig/gcc/config/rs6000/altivec.h	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/altivec.h	2012-02-23 17:25:17.191039017 -0600
@@ -322,6 +322,30 @@
 #define vec_vsx_st __builtin_vec_vsx_st
 #endif
 
+#ifdef __ALTIVEC2__
+/* New Altivec instructions */
+#define vec_absd __builtin_vec_absd
+#define vec_lvexbx __builtin_vec_lvexbx
+#define vec_lvexhx __builtin_vec_lvexhx
+#define vec_lvexwx __builtin_vec_lvexwx
+#define vec_stvexbx __builtin_vec_stvexbx
+#define vec_stvexhx __builtin_vec_stvexhx
+#define vec_stvexwx __builtin_vec_stvexwx
+#define vec_lvswx __builtin_vec_lvswx
+#define vec_lvswxl __builtin_vec_lvswxl
+#define vec_stvswx __builtin_vec_stvswx
+#define vec_stvswxl __builtin_vec_stvswxl
+#define vec_lvsm __builtin_vec_lvsm
+#define vec_lvtlx __builtin_vec_lvtlx
+#define vec_lvtlxl __builtin_vec_lvtlxl
+#define vec_lvtrx __builtin_vec_lvtrx
+#define vec_lvtrxl __builtin_vec_lvtrxl
+#define vec_stvflx __builtin_vec_stvflx
+#define vec_stvflxl __builtin_vec_stvflxl
+#define vec_stvfrx __builtin_vec_stvfrx
+#define vec_stvfrxl __builtin_vec_stvfrxl
+#endif
+
 /* Predicates.
    For C++, we use templates in order to allow non-parenthesized arguments.
    For C, instead, we use macros since non-parenthesized arguments were
diff -ruN gcc-20120223-orig/gcc/config/rs6000/altivec.md gcc-20120223/gcc/config/rs6000/altivec.md
--- gcc-20120223-orig/gcc/config/rs6000/altivec.md	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/altivec.md	2012-02-24 13:18:06.612039003 -0600
@@ -85,9 +85,11 @@
    UNSPEC_LVSL
    UNSPEC_LVSR
    UNSPEC_LVE
+   UNSPEC_LVEX
    UNSPEC_STVX
    UNSPEC_STVXL
    UNSPEC_STVE
+   UNSPEC_STVEX
    UNSPEC_SET_VSCR
    UNSPEC_GET_VRSAVE
    UNSPEC_LVX
@@ -115,6 +117,19 @@
    UNSPEC_STVLXL
    UNSPEC_STVRX
    UNSPEC_STVRXL
+   UNSPEC_LVTLX
+   UNSPEC_LVTLXL
+   UNSPEC_LVTRX
+   UNSPEC_LVTRXL
+   UNSPEC_STVFLX
+   UNSPEC_STVFLXL
+   UNSPEC_STVFRX
+   UNSPEC_STVFRXL
+   UNSPEC_LVSWX
+   UNSPEC_LVSWXL
+   UNSPEC_STVSWX
+   UNSPEC_STVSWXL
+   UNSPEC_LVSM
    UNSPEC_VMULWHUB
    UNSPEC_VMULWLUB
    UNSPEC_VMULWHSB
@@ -135,6 +150,9 @@
    UNSPEC_VUPKLS_V4SF
    UNSPEC_VUPKHU_V4SF
    UNSPEC_VUPKLU_V4SF
+   UNSPEC_VABSDUB
+   UNSPEC_VABSDUH
+   UNSPEC_VABSDUW
 ])
 
 (define_c_enum "unspecv"
@@ -315,6 +333,34 @@
 
 ;; Simple binary operations.
 
+;; absd
+(define_insn "altivec_vabsduw"
+  [(set (match_operand:V4SI 0 "register_operand" "=v")
+        (unspec:V4SI [(match_operand:V4SI 1 "register_operand" "v")
+                      (match_operand:V4SI 2 "register_operand" "v")]
+		     UNSPEC_VABSDUW))]
+  "TARGET_ALTIVEC2"
+  "vabsduw %0,%1,%2"
+  [(set_attr "type" "vecsimple")])
+
+(define_insn "altivec_vabsduh"
+  [(set (match_operand:V8HI 0 "register_operand" "=v")
+        (unspec:V8HI [(match_operand:V8HI 1 "register_operand" "v")
+                      (match_operand:V8HI 2 "register_operand" "v")]
+		     UNSPEC_VABSDUH))]
+  "TARGET_ALTIVEC2"
+  "vabsduh %0,%1,%2"
+  [(set_attr "type" "vecsimple")])
+
+(define_insn "altivec_vabsdub"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand:V16QI 1 "register_operand" "v")
+                       (match_operand:V16QI 2 "register_operand" "v")]
+		      UNSPEC_VABSDUB))]
+  "TARGET_ALTIVEC2"
+  "vabsdub %0,%1,%2"
+  [(set_attr "type" "vecsimple")])
+
 ;; add
 (define_insn "add<mode>3"
   [(set (match_operand:VI 0 "register_operand" "=v")
@@ -1665,6 +1711,15 @@
   "lvewx %0,%y1"
   [(set_attr "type" "vecload")])
 
+(define_insn "altivec_lvex<VI_char>x"
+  [(parallel
+    [(set (match_operand:VI 0 "register_operand" "=v")
+	  (match_operand:VI 1 "memory_operand" "Z"))
+     (unspec [(const_int 0)] UNSPEC_LVEX)])]
+  "TARGET_ALTIVEC2"
+  "lvex<VI_char>x %0,%y1"
+  [(set_attr "type" "vecload")])
+
 (define_insn "altivec_lvxl"
   [(parallel
     [(set (match_operand:V4SI 0 "register_operand" "=v")
@@ -1715,6 +1770,13 @@
   "stvewx %1,%y0"
   [(set_attr "type" "vecstore")])
 
+(define_insn "altivec_stvex<VI_char>x"
+  [(set (match_operand:<VI_scalar> 0 "memory_operand" "=Z")
+	(unspec:<VI_scalar> [(match_operand:VI 1 "register_operand" "v")] UNSPEC_STVEX))]
+  "TARGET_ALTIVEC2"
+  "stvex<VI_char>x %1,%y0"
+  [(set_attr "type" "vecstore")])
+
 ;; Generate
 ;;    vspltis? SCRATCH0,0
 ;;    vsubu?m SCRATCH2,SCRATCH1,%1
@@ -2318,8 +2380,8 @@
 
 (define_insn "altivec_stvlx"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVLX)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvlx %1,%y0"
@@ -2327,8 +2389,8 @@
 
 (define_insn "altivec_stvlxl"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVLXL)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvlxl %1,%y0"
@@ -2336,8 +2398,8 @@
 
 (define_insn "altivec_stvrx"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVRX)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvrx %1,%y0"
@@ -2345,13 +2407,123 @@
 
 (define_insn "altivec_stvrxl"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVRXL)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvrxl %1,%y0"
   [(set_attr "type" "vecstore")])
 
+(define_insn "altivec_lvtlx"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVTLX))]
+  "TARGET_ALTIVEC2"
+  "lvtlx %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_lvtlxl"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVTLXL))]
+  "TARGET_ALTIVEC2"
+  "lvtlxl %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_lvtrx"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVTRX))]
+  "TARGET_ALTIVEC2"
+  "lvtrx %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_lvtrxl"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVTRXL))]
+  "TARGET_ALTIVEC2"
+  "lvtrxl %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_stvflx"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVFLX)])]
+  "TARGET_ALTIVEC2"
+  "stvflx %1,%y0"
+  [(set_attr "type" "vecstore")])
+
+(define_insn "altivec_stvflxl"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVFLXL)])]
+  "TARGET_ALTIVEC2"
+  "stvflxl %1,%y0"
+  [(set_attr "type" "vecstore")])
+
+(define_insn "altivec_stvfrx"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVFRX)])]
+  "TARGET_ALTIVEC2"
+  "stvfrx %1,%y0"
+  [(set_attr "type" "vecstore")])
+
+(define_insn "altivec_stvfrxl"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVFRXL)])]
+  "TARGET_ALTIVEC2"
+  "stvfrxl %1,%y0"
+  [(set_attr "type" "vecstore")])
+
+(define_insn "altivec_lvswx"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVSWX))]
+  "TARGET_ALTIVEC2"
+  "lvswx %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_lvswxl"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVSWXL))]
+  "TARGET_ALTIVEC2"
+  "lvswxl %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_lvsm"
+  [(set (match_operand:V16QI 0 "register_operand" "=v")
+        (unspec:V16QI [(match_operand 1 "memory_operand" "Z")] 
+		      UNSPEC_LVSM))]
+  "TARGET_ALTIVEC2"
+  "lvsm %0,%y1"
+  [(set_attr "type" "vecload")])
+
+(define_insn "altivec_stvswx"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVSWX)])]
+  "TARGET_ALTIVEC2"
+  "stvswx %1,%y0"
+  [(set_attr "type" "vecstore")])
+
+(define_insn "altivec_stvswxl"
+  [(parallel
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
+     (unspec [(const_int 0)] UNSPEC_STVSWXL)])]
+  "TARGET_ALTIVEC2"
+  "stvswxl %1,%y0"
+  [(set_attr "type" "vecstore")])
+
 (define_expand "vec_unpacks_float_hi_v8hi"
  [(set (match_operand:V4SF 0 "register_operand" "")
         (unspec:V4SF [(match_operand:V8HI 1 "register_operand" "")]
diff -ruN gcc-20120223-orig/gcc/config/rs6000/e5500.md gcc-20120223/gcc/config/rs6000/e5500.md
--- gcc-20120223-orig/gcc/config/rs6000/e5500.md	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/e5500.md	2012-03-02 13:43:37.838039002 -0600
@@ -0,0 +1,176 @@
+;; Pipeline description for Freescale PowerPC e5500 core.
+;;   Copyright (C) 2012 Free Software Foundation, Inc.
+;;   Contributed by Edmar Wienskoski (edmar@freescale.com)
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify it
+;; under the terms of the GNU General Public License as published
+;; by the Free Software Foundation; either version 3, or (at your
+;; option) any later version.
+;;
+;; GCC is distributed in the hope that it will be useful, but WITHOUT
+;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+;; or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+;; License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+;;
+;; e5500 64-bit SFX(2), CFX, LSU, FPU, BU
+;; Max issue 3 insns/clock cycle (includes 1 branch)
+
+(define_automaton "e5500_most,e5500_long")
+(define_cpu_unit "e5500_decode_0,e5500_decode_1" "e5500_most")
+
+;; SFX.
+(define_cpu_unit "e5500_sfx_0,e5500_sfx_1" "e5500_most")
+
+;; CFX.
+(define_cpu_unit "e5500_cfx_stage0,e5500_cfx_stage1" "e5500_most")
+
+;; Non-pipelined division.
+(define_cpu_unit "e5500_cfx_div" "e5500_long")
+
+;; LSU.
+(define_cpu_unit "e5500_lsu" "e5500_most")
+
+;; FPU.
+(define_cpu_unit "e5500_fpu" "e5500_long")
+
+;; BU.
+(define_cpu_unit "e5500_bu" "e5500_most")
+
+;; The following units are used to make the automata deterministic.
+(define_cpu_unit "present_e5500_decode_0" "e5500_most")
+(define_cpu_unit "present_e5500_sfx_0" "e5500_most")
+(presence_set "present_e5500_decode_0" "e5500_decode_0")
+(presence_set "present_e5500_sfx_0" "e5500_sfx_0")
+
+;; Some useful abbreviations.
+(define_reservation "e5500_decode"
+    "e5500_decode_0|e5500_decode_1+present_e5500_decode_0")
+(define_reservation "e5500_sfx"
+   "e5500_sfx_0|e5500_sfx_1+present_e5500_sfx_0")
+
+;; SFX.
+(define_insn_reservation "e5500_sfx" 1
+  (and (eq_attr "type" "integer,insert_word,insert_dword,delayed_compare,\
+	shift,cntlz,exts")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx")
+
+(define_insn_reservation "e5500_sfx2" 2
+  (and (eq_attr "type" "cmp,compare,fast_compare,trap")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx")
+
+(define_insn_reservation "e5500_delayed" 2
+  (and (eq_attr "type" "var_shift_rotate,var_delayed_compare,popcnt")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx*2")
+
+(define_insn_reservation "e5500_two" 2
+  (and (eq_attr "type" "two")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_decode+e5500_sfx,e5500_sfx")
+
+(define_insn_reservation "e5500_three" 3
+  (and (eq_attr "type" "three")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,(e5500_decode+e5500_sfx)*2,e5500_sfx")
+
+;; SFX - Mfcr.
+(define_insn_reservation "e5500_mfcr" 4
+  (and (eq_attr "type" "mfcr")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx_0*4")
+
+;; SFX - Mtcrf.
+(define_insn_reservation "e5500_mtcrf" 1
+  (and (eq_attr "type" "mtcr")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx_0")
+
+;; SFX - Mtjmpr.
+(define_insn_reservation "e5500_mtjmpr" 1
+  (and (eq_attr "type" "mtjmpr,mfjmpr")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_sfx")
+
+;; CFX - Multiply.
+(define_insn_reservation "e5500_multiply" 4
+  (and (eq_attr "type" "imul")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_cfx_stage0,e5500_cfx_stage1")
+
+(define_insn_reservation "e5500_multiply_i" 5
+  (and (eq_attr "type" "imul2,imul3,imul_compare")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_cfx_stage0,\
+   e5500_cfx_stage0+e5500_cfx_stage1,e5500_cfx_stage1")
+
+;; CFX - Divide.
+(define_insn_reservation "e5500_divide" 16
+  (and (eq_attr "type" "idiv")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_cfx_stage0+e5500_cfx_div,\
+   e5500_cfx_div*15")
+
+(define_insn_reservation "e5500_divide_d" 26
+  (and (eq_attr "type" "ldiv")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_cfx_stage0+e5500_cfx_div,\
+   e5500_cfx_div*25")
+
+;; LSU - Loads.
+(define_insn_reservation "e5500_load" 3
+  (and (eq_attr "type" "load,load_ext,load_ext_u,load_ext_ux,load_ux,load_u,\
+			load_l,sync")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_lsu")
+
+(define_insn_reservation "e5500_fpload" 4
+  (and (eq_attr "type" "fpload,fpload_ux,fpload_u")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_lsu")
+
+;; LSU - Stores.
+(define_insn_reservation "e5500_store" 3
+  (and (eq_attr "type" "store,store_ux,store_u,store_c")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_lsu")
+
+(define_insn_reservation "e5500_fpstore" 3
+  (and (eq_attr "type" "fpstore,fpstore_ux,fpstore_u")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_lsu")
+
+;; FP.
+(define_insn_reservation "e5500_float" 7
+  (and (eq_attr "type" "fpsimple,fp,fpcompare,dmul")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_fpu")
+
+(define_insn_reservation "e5500_sdiv" 20
+  (and (eq_attr "type" "sdiv")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_fpu*20")
+
+(define_insn_reservation "e5500_ddiv" 35
+  (and (eq_attr "type" "ddiv")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_fpu*35")
+
+;; BU.
+(define_insn_reservation "e5500_branch" 1
+  (and (eq_attr "type" "jmpreg,branch,isync")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_bu")
+
+;; BU - CR logical.
+(define_insn_reservation "e5500_cr_logical" 1
+  (and (eq_attr "type" "cr_logical,delayed_cr")
+       (eq_attr "cpu" "ppce5500"))
+  "e5500_decode,e5500_bu")
diff -ruN gcc-20120223-orig/gcc/config/rs6000/e6500.md gcc-20120223/gcc/config/rs6000/e6500.md
--- gcc-20120223-orig/gcc/config/rs6000/e6500.md	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/e6500.md	2012-03-02 13:43:44.813039002 -0600
@@ -0,0 +1,213 @@
+;; Pipeline description for Freescale PowerPC e6500 core.
+;;   Copyright (C) 2012 Free Software Foundation, Inc.
+;;   Contributed by Edmar Wienskoski (edmar@freescale.com)
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify it
+;; under the terms of the GNU General Public License as published
+;; by the Free Software Foundation; either version 3, or (at your
+;; option) any later version.
+;;
+;; GCC is distributed in the hope that it will be useful, but WITHOUT
+;; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+;; or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+;; License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+;;
+;; e6500 64-bit SFX(2), CFX, LSU, FPU, BU, VSFX, VCFX, VFPU, VPERM
+;; Max issue 3 insns/clock cycle (includes 1 branch)
+
+(define_automaton "e6500_most,e6500_long,e6500_vec")
+(define_cpu_unit "e6500_decode_0,e6500_decode_1" "e6500_most")
+
+;; SFX.
+(define_cpu_unit "e6500_sfx_0,e6500_sfx_1" "e6500_most")
+
+;; CFX.
+(define_cpu_unit "e6500_cfx_stage0,e6500_cfx_stage1" "e6500_most")
+
+;; Non-pipelined division.
+(define_cpu_unit "e6500_cfx_div" "e6500_long")
+
+;; LSU.
+(define_cpu_unit "e6500_lsu" "e6500_most")
+
+;; FPU.
+(define_cpu_unit "e6500_fpu" "e6500_long")
+
+;; BU.
+(define_cpu_unit "e6500_bu" "e6500_most")
+
+;; Altivec unit
+(define_cpu_unit "e6500_vec,e6500_vecperm" "e6500_vec")
+
+;; The following units are used to make the automata deterministic.
+(define_cpu_unit "present_e6500_decode_0" "e6500_most")
+(define_cpu_unit "present_e6500_sfx_0" "e6500_most")
+(presence_set "present_e6500_decode_0" "e6500_decode_0")
+(presence_set "present_e6500_sfx_0" "e6500_sfx_0")
+
+;; Some useful abbreviations.
+(define_reservation "e6500_decode"
+    "e6500_decode_0|e6500_decode_1+present_e6500_decode_0")
+(define_reservation "e6500_sfx"
+   "e6500_sfx_0|e6500_sfx_1+present_e6500_sfx_0")
+
+;; SFX.
+(define_insn_reservation "e6500_sfx" 1
+  (and (eq_attr "type" "integer,insert_word,insert_dword,delayed_compare,\
+	shift,cntlz,exts")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx")
+
+(define_insn_reservation "e6500_sfx2" 2
+  (and (eq_attr "type" "cmp,compare,fast_compare,trap")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx")
+
+(define_insn_reservation "e6500_delayed" 2
+  (and (eq_attr "type" "var_shift_rotate,var_delayed_compare,popcnt")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx*2")
+
+(define_insn_reservation "e6500_two" 2
+  (and (eq_attr "type" "two")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_decode+e6500_sfx,e6500_sfx")
+
+(define_insn_reservation "e6500_three" 3
+  (and (eq_attr "type" "three")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,(e6500_decode+e6500_sfx)*2,e6500_sfx")
+
+;; SFX - Mfcr.
+(define_insn_reservation "e6500_mfcr" 4
+  (and (eq_attr "type" "mfcr")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx_0*4")
+
+;; SFX - Mtcrf.
+(define_insn_reservation "e6500_mtcrf" 1
+  (and (eq_attr "type" "mtcr")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx_0")
+
+;; SFX - Mtjmpr.
+(define_insn_reservation "e6500_mtjmpr" 1
+  (and (eq_attr "type" "mtjmpr,mfjmpr")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_sfx")
+
+;; CFX - Multiply.
+(define_insn_reservation "e6500_multiply" 4
+  (and (eq_attr "type" "imul")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_cfx_stage0,e6500_cfx_stage1")
+
+(define_insn_reservation "e6500_multiply_i" 5
+  (and (eq_attr "type" "imul2,imul3,imul_compare")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_cfx_stage0,\
+   e6500_cfx_stage0+e6500_cfx_stage1,e6500_cfx_stage1")
+
+;; CFX - Divide.
+(define_insn_reservation "e6500_divide" 16
+  (and (eq_attr "type" "idiv")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_cfx_stage0+e6500_cfx_div,\
+   e6500_cfx_div*15")
+
+(define_insn_reservation "e6500_divide_d" 26
+  (and (eq_attr "type" "ldiv")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_cfx_stage0+e6500_cfx_div,\
+   e6500_cfx_div*25")
+
+;; LSU - Loads.
+(define_insn_reservation "e6500_load" 3
+  (and (eq_attr "type" "load,load_ext,load_ext_u,load_ext_ux,load_ux,load_u,\
+			load_l,sync")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+(define_insn_reservation "e6500_fpload" 4
+  (and (eq_attr "type" "fpload,fpload_ux,fpload_u")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+(define_insn_reservation "e6500_vecload" 4
+  (and (eq_attr "type" "vecload")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+;; LSU - Stores.
+(define_insn_reservation "e6500_store" 3
+  (and (eq_attr "type" "store,store_ux,store_u,store_c")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+(define_insn_reservation "e6500_fpstore" 3
+  (and (eq_attr "type" "fpstore,fpstore_ux,fpstore_u")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+(define_insn_reservation "e6500_vecstore" 4
+  (and (eq_attr "type" "vecstore")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_lsu")
+
+;; FP.
+(define_insn_reservation "e6500_float" 7
+  (and (eq_attr "type" "fpsimple,fp,fpcompare,dmul")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_fpu")
+
+(define_insn_reservation "e6500_sdiv" 20
+  (and (eq_attr "type" "sdiv")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_fpu*20")
+
+(define_insn_reservation "e6500_ddiv" 35
+  (and (eq_attr "type" "ddiv")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_fpu*35")
+
+;; BU.
+(define_insn_reservation "e6500_branch" 1
+  (and (eq_attr "type" "jmpreg,branch,isync")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_bu")
+
+;; BU - CR logical.
+(define_insn_reservation "e6500_cr_logical" 1
+  (and (eq_attr "type" "cr_logical,delayed_cr")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_bu")
+
+;; VSFX.
+(define_insn_reservation "e6500_vecsimple" 1
+  (and (eq_attr "type" "vecsimple,veccmp")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_vec")
+
+;; VCFX.
+(define_insn_reservation "e6500_veccomplex" 4
+  (and (eq_attr "type" "veccomplex")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_vec")
+
+;; VFPU.
+(define_insn_reservation "e6500_vecfloat" 6
+  (and (eq_attr "type" "vecfloat")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_vec")
+
+;; VPERM.
+(define_insn_reservation "e6500_vecperm" 2
+  (and (eq_attr "type" "vecperm")
+       (eq_attr "cpu" "ppce6500"))
+  "e6500_decode,e6500_vecperm")
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000-builtin.def gcc-20120223/gcc/config/rs6000/rs6000-builtin.def
--- gcc-20120223-orig/gcc/config/rs6000/rs6000-builtin.def	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000-builtin.def	2012-02-24 11:20:50.303038974 -0600
@@ -398,6 +398,43 @@
 		    MASK,				/* MASK */	\
 		    (ATTR | RS6000_BTC_SPECIAL),	/* ATTR */	\
 		    CODE_FOR_nothing)			/* ICODE */
+
+/* Power ISA 2.07 altivec */
+#define BU_ALTIVC2_2(ENUM, NAME, ATTR, ICODE)				\
+  RS6000_BUILTIN_2 (ALTIVEC_BUILTIN_ ## ENUM,		/* ENUM */	\
+		    "__builtin_altivec_" NAME,		/* NAME */	\
+		    (RS6000_BTM_ALTIVEC			/* MASK */	\
+		     | RS6000_BTM_ALTIVEC2),				\
+		    (RS6000_BTC_ ## ATTR		/* ATTR */	\
+		     | RS6000_BTC_BINARY),				\
+		    CODE_FOR_ ## ICODE)			/* ICODE */
+
+#define BU_ALTIVC2_X(ENUM, NAME, ATTR)					\
+  RS6000_BUILTIN_X (ALTIVEC_BUILTIN_ ## ENUM,		/* ENUM */	\
+		    "__builtin_altivec_" NAME,		/* NAME */	\
+		    (RS6000_BTM_ALTIVEC			/* MASK */	\
+		     | RS6000_BTM_ALTIVEC2),				\
+		    (RS6000_BTC_ ## ATTR		/* ATTR */	\
+		     | RS6000_BTC_SPECIAL),				\
+		    CODE_FOR_nothing)			/* ICODE */
+
+#define BU_ALTIVC2_OVERLOAD_2(ENUM, NAME)				\
+  RS6000_BUILTIN_2 (ALTIVEC_BUILTIN_VEC_ ## ENUM,	/* ENUM */	\
+		    "__builtin_vec_" NAME,		/* NAME */	\
+		    (RS6000_BTM_ALTIVEC			/* MASK */	\
+		     | RS6000_BTM_ALTIVEC2),				\
+		    (RS6000_BTC_OVERLOADED		/* ATTR */	\
+		     | RS6000_BTC_BINARY),				\
+		    CODE_FOR_nothing)			/* ICODE */
+
+#define BU_ALTIVC2_OVERLOAD_X(ENUM, NAME)				\
+  RS6000_BUILTIN_X (ALTIVEC_BUILTIN_VEC_ ## ENUM,	/* ENUM */	\
+		    "__builtin_vec_" NAME,		/* NAME */	\
+		    (RS6000_BTM_ALTIVEC			/* MASK */	\
+		     | RS6000_BTM_ALTIVEC2),				\
+		    (RS6000_BTC_OVERLOADED		/* ATTR */	\
+		     | RS6000_BTC_SPECIAL),				\
+		    CODE_FOR_nothing)			/* ICODE */
 #endif
 
 /* Insure 0 is not a legitimate index.  */
@@ -447,6 +484,9 @@
 BU_ALTIVEC_D (DSTSTT,	      "dststt",		MISC,  	altivec_dststt)
 
 /* Altivec 2 argument builtin functions.  */
+BU_ALTIVC2_2 (VABSDUB,        "vabsdub",	CONST,	altivec_vabsdub)
+BU_ALTIVC2_2 (VABSDUH,        "vabsduh",	CONST,	altivec_vabsduh)
+BU_ALTIVC2_2 (VABSDUW,        "vabsduw",	CONST,	altivec_vabsduw)
 BU_ALTIVEC_2 (VADDUBM,        "vaddubm",	CONST,	addv16qi3)
 BU_ALTIVEC_2 (VADDUHM,	      "vadduhm",	CONST,	addv8hi3)
 BU_ALTIVEC_2 (VADDUWM,	      "vadduwm",	CONST,	addv4si3)
@@ -636,6 +676,9 @@
 BU_ALTIVEC_X (LVEBX,		"lvebx",	    MEM)
 BU_ALTIVEC_X (LVEHX,		"lvehx",	    MEM)
 BU_ALTIVEC_X (LVEWX,		"lvewx",	    MEM)
+BU_ALTIVC2_X (LVEXBX,		"lvexbx",	    MEM)
+BU_ALTIVC2_X (LVEXHX,		"lvexhx",	    MEM)
+BU_ALTIVC2_X (LVEXWX,		"lvexwx",	    MEM)
 BU_ALTIVEC_X (LVXL,		"lvxl",		    MEM)
 BU_ALTIVEC_X (LVX,		"lvx",		    MEM)
 BU_ALTIVEC_X (STVX,		"stvx",		    MEM)
@@ -643,14 +686,30 @@
 BU_ALTIVEC_C (LVLXL,		"lvlxl",	    MEM)
 BU_ALTIVEC_C (LVRX,		"lvrx",		    MEM)
 BU_ALTIVEC_C (LVRXL,		"lvrxl",	    MEM)
+BU_ALTIVC2_X (LVTLX,		"lvtlx",	    MEM)
+BU_ALTIVC2_X (LVTLXL,		"lvtlxl",	    MEM)
+BU_ALTIVC2_X (LVTRX,		"lvtrx",	    MEM)
+BU_ALTIVC2_X (LVTRXL,		"lvtrxl",	    MEM)
+BU_ALTIVC2_X (LVSWX,		"lvswx",	    MEM)
+BU_ALTIVC2_X (LVSWXL,		"lvswxl",	    MEM)
+BU_ALTIVC2_X (LVSM,		"lvsm",		    MEM)
 BU_ALTIVEC_X (STVEBX,		"stvebx",	    MEM)
 BU_ALTIVEC_X (STVEHX,		"stvehx",	    MEM)
 BU_ALTIVEC_X (STVEWX,		"stvewx",	    MEM)
+BU_ALTIVC2_X (STVEXBX,		"stvexbx",	    MEM)
+BU_ALTIVC2_X (STVEXHX,		"stvexhx",	    MEM)
+BU_ALTIVC2_X (STVEXWX,		"stvexwx",	    MEM)
 BU_ALTIVEC_X (STVXL,		"stvxl",	    MEM)
 BU_ALTIVEC_C (STVLX,		"stvlx",	    MEM)
 BU_ALTIVEC_C (STVLXL,		"stvlxl",	    MEM)
 BU_ALTIVEC_C (STVRX,		"stvrx",	    MEM)
 BU_ALTIVEC_C (STVRXL,		"stvrxl",	    MEM)
+BU_ALTIVC2_X (STVFLX,		"stvflx",	    MEM)
+BU_ALTIVC2_X (STVFLXL,		"stvflxl",	    MEM)
+BU_ALTIVC2_X (STVFRX,		"stvfrx",	    MEM)
+BU_ALTIVC2_X (STVFRXL,		"stvfrxl",	    MEM)
+BU_ALTIVC2_X (STVSWX,		"stvswx",	    MEM)
+BU_ALTIVC2_X (STVSWXL,		"stvswxl",	    MEM)
 BU_ALTIVEC_X (MASK_FOR_LOAD,	"mask_for_load",    MISC)
 BU_ALTIVEC_X (MASK_FOR_STORE,	"mask_for_store",   MISC)
 BU_ALTIVEC_X (VEC_INIT_V4SI,	"vec_init_v4si",    CONST)
@@ -695,6 +754,10 @@
 BU_ALTIVEC_OVERLOAD_D (DSTSTT,	   "dststt")
 
 /* 2 argument Altivec overloaded builtins.  */
+BU_ALTIVC2_OVERLOAD_2 (ABSD,	   "absd")
+BU_ALTIVC2_OVERLOAD_2 (ABSDUB,	   "absdub")
+BU_ALTIVC2_OVERLOAD_2 (ABSDUH,	   "absduh")
+BU_ALTIVC2_OVERLOAD_2 (ABSDUW,	   "absduw")
 BU_ALTIVEC_OVERLOAD_2 (ADD,	   "add")
 BU_ALTIVEC_OVERLOAD_2 (ADDC,	   "addc")
 BU_ALTIVEC_OVERLOAD_2 (ADDS,	   "adds")
@@ -867,10 +930,20 @@
 BU_ALTIVEC_OVERLOAD_X (LVEBX,	   "lvebx")
 BU_ALTIVEC_OVERLOAD_X (LVEHX,	   "lvehx")
 BU_ALTIVEC_OVERLOAD_X (LVEWX,	   "lvewx")
+BU_ALTIVC2_OVERLOAD_X (LVEXBX,	   "lvexbx")
+BU_ALTIVC2_OVERLOAD_X (LVEXHX,	   "lvexhx")
+BU_ALTIVC2_OVERLOAD_X (LVEXWX,	   "lvexwx")
 BU_ALTIVEC_OVERLOAD_X (LVLX,	   "lvlx")
 BU_ALTIVEC_OVERLOAD_X (LVLXL,	   "lvlxl")
 BU_ALTIVEC_OVERLOAD_X (LVRX,	   "lvrx")
 BU_ALTIVEC_OVERLOAD_X (LVRXL,	   "lvrxl")
+BU_ALTIVC2_OVERLOAD_X (LVTLX,	   "lvtlx")
+BU_ALTIVC2_OVERLOAD_X (LVTLXL,	   "lvtlxl")
+BU_ALTIVC2_OVERLOAD_X (LVTRX,	   "lvtrx")
+BU_ALTIVC2_OVERLOAD_X (LVTRXL,	   "lvtrxl")
+BU_ALTIVC2_OVERLOAD_X (LVSWX,	   "lvswx")
+BU_ALTIVC2_OVERLOAD_X (LVSWXL,	   "lvswxl")
+BU_ALTIVC2_OVERLOAD_X (LVSM,	   "lvsm")
 BU_ALTIVEC_OVERLOAD_X (LVSL,	   "lvsl")
 BU_ALTIVEC_OVERLOAD_X (LVSR,	   "lvsr")
 BU_ALTIVEC_OVERLOAD_X (PROMOTE,	   "promote")
@@ -884,10 +957,19 @@
 BU_ALTIVEC_OVERLOAD_X (STVEBX,	   "stvebx")
 BU_ALTIVEC_OVERLOAD_X (STVEHX,	   "stvehx")
 BU_ALTIVEC_OVERLOAD_X (STVEWX,	   "stvewx")
+BU_ALTIVC2_OVERLOAD_X (STVEXBX,	   "stvexbx")
+BU_ALTIVC2_OVERLOAD_X (STVEXHX,	   "stvexhx")
+BU_ALTIVC2_OVERLOAD_X (STVEXWX,	   "stvexwx")
 BU_ALTIVEC_OVERLOAD_X (STVLX,	   "stvlx")
 BU_ALTIVEC_OVERLOAD_X (STVLXL,	   "stvlxl")
 BU_ALTIVEC_OVERLOAD_X (STVRX,	   "stvrx")
 BU_ALTIVEC_OVERLOAD_X (STVRXL,	   "stvrxl")
+BU_ALTIVC2_OVERLOAD_X (STVFLX,	   "stvflx")
+BU_ALTIVC2_OVERLOAD_X (STVFLXL,	   "stvflxl")
+BU_ALTIVC2_OVERLOAD_X (STVFRX,	   "stvfrx")
+BU_ALTIVC2_OVERLOAD_X (STVFRXL,	   "stvfrxl")
+BU_ALTIVC2_OVERLOAD_X (STVSWX,	   "stvswx")
+BU_ALTIVC2_OVERLOAD_X (STVSWXL,	   "stvswxl")
 BU_ALTIVEC_OVERLOAD_X (VCFSX,	   "vcfsx")
 BU_ALTIVEC_OVERLOAD_X (VCFUX,	   "vcfux")
 BU_ALTIVEC_OVERLOAD_X (VSPLTB,	   "vspltb")
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000.c gcc-20120223/gcc/config/rs6000/rs6000.c
--- gcc-20120223-orig/gcc/config/rs6000/rs6000.c	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000.c	2012-03-01 16:04:56.814038995 -0600
@@ -755,6 +755,44 @@
   1,			/* prefetch streams /*/
 };
 
+/* Instruction costs on PPCE5500 processors.  */
+static const
+struct processor_costs ppce5500_cost = {
+  COSTS_N_INSNS (5),    /* mulsi */
+  COSTS_N_INSNS (5),    /* mulsi_const */
+  COSTS_N_INSNS (5),    /* mulsi_const9 */
+  COSTS_N_INSNS (5),    /* muldi */
+  COSTS_N_INSNS (14),   /* divsi */
+  COSTS_N_INSNS (14),   /* divdi */
+  COSTS_N_INSNS (7),    /* fp */
+  COSTS_N_INSNS (10),   /* dmul */
+  COSTS_N_INSNS (36),   /* sdiv */
+  COSTS_N_INSNS (66),   /* ddiv */
+  64,			/* cache line size */
+  32,			/* l1 cache */
+  128,			/* l2 cache */
+  1,			/* prefetch streams /*/
+};
+
+/* Instruction costs on PPCE6500 processors.  */
+static const
+struct processor_costs ppce6500_cost = {
+  COSTS_N_INSNS (5),    /* mulsi */
+  COSTS_N_INSNS (5),    /* mulsi_const */
+  COSTS_N_INSNS (5),    /* mulsi_const9 */
+  COSTS_N_INSNS (5),    /* muldi */
+  COSTS_N_INSNS (14),   /* divsi */
+  COSTS_N_INSNS (14),   /* divdi */
+  COSTS_N_INSNS (7),    /* fp */
+  COSTS_N_INSNS (10),   /* dmul */
+  COSTS_N_INSNS (36),   /* sdiv */
+  COSTS_N_INSNS (66),   /* ddiv */
+  64,			/* cache line size */
+  32,			/* l1 cache */
+  128,			/* l2 cache */
+  1,			/* prefetch streams /*/
+};
+
 /* Instruction costs on AppliedMicro Titan processors.  */
 static const
 struct processor_costs titan_cost = {
@@ -1694,7 +1732,7 @@
 		   | MASK_MFCRF | MASK_POPCNTB | MASK_FPRND | MASK_MULHW
 		   | MASK_DLMZB | MASK_CMPB | MASK_MFPGPR | MASK_DFP
 		   | MASK_POPCNTD | MASK_VSX | MASK_ISEL | MASK_NO_UPDATE
-		   | MASK_RECIP_PRECISION)
+		   | MASK_RECIP_PRECISION | MASK_ALTIVEC2)
 };
 
 /* Masks for instructions set at various powerpc ISAs.  */
@@ -2577,6 +2615,7 @@
 rs6000_builtin_mask_calculate (void)
 {
   return (((TARGET_ALTIVEC)		    ? RS6000_BTM_ALTIVEC  : 0)
+	  | ((TARGET_ALTIVEC2)		    ? RS6000_BTM_ALTIVEC2 : 0)
 	  | ((TARGET_VSX)		    ? RS6000_BTM_VSX	  : 0)
 	  | ((TARGET_SPE)		    ? RS6000_BTM_SPE	  : 0)
 	  | ((TARGET_PAIRED_FLOAT)	    ? RS6000_BTM_PAIRED	  : 0)
@@ -2691,13 +2730,19 @@
 		   : PROCESSOR_DEFAULT));
 
   if (rs6000_cpu == PROCESSOR_PPCE300C2 || rs6000_cpu == PROCESSOR_PPCE300C3
-      || rs6000_cpu == PROCESSOR_PPCE500MC || rs6000_cpu == PROCESSOR_PPCE500MC64)
+      || rs6000_cpu == PROCESSOR_PPCE500MC || rs6000_cpu == PROCESSOR_PPCE500MC64
+      || rs6000_cpu == PROCESSOR_PPCE5500)
     {
       if (TARGET_ALTIVEC)
 	error ("AltiVec not supported in this target");
       if (TARGET_SPE)
 	error ("SPE not supported in this target");
     }
+  if (rs6000_cpu == PROCESSOR_PPCE6500)
+    {
+      if (TARGET_SPE)
+	error ("SPE not supported in this target");
+    }
 
   /* Disable Cell microcode if we are optimizing for the Cell
      and not optimizing for size.  */
@@ -2792,9 +2837,20 @@
      user's opinion, though.  */
   if (rs6000_block_move_inline_limit == 0
       && (rs6000_cpu == PROCESSOR_PPCE500MC
-	  || rs6000_cpu == PROCESSOR_PPCE500MC64))
+	  || rs6000_cpu == PROCESSOR_PPCE500MC64
+	  || rs6000_cpu == PROCESSOR_PPCE5500
+	  || rs6000_cpu == PROCESSOR_PPCE6500))
     rs6000_block_move_inline_limit = 128;
 
+  /* These machines do not have an fsqrt instruction.  */
+  if (rs6000_cpu == PROCESSOR_PPCE5500
+      || rs6000_cpu == PROCESSOR_PPCE6500)
+    target_flags &= ~(MASK_PPC_GPOPT);
+
+  /* These machines implement a slow mfocr opcode.  */
+  if (rs6000_cpu == PROCESSOR_PPCE5500)
+    target_flags &= ~MASK_MFCRF;
+
   /* store_one_arg depends on expand_block_move to handle at least the
      size of reg_parm_stack_space.  */
   if (rs6000_block_move_inline_limit < (TARGET_POWERPC64 ? 64 : 32))
@@ -2926,7 +2982,9 @@
 #endif
 
   if (TARGET_E500 || rs6000_cpu == PROCESSOR_PPCE500MC
-      || rs6000_cpu == PROCESSOR_PPCE500MC64)
+      || rs6000_cpu == PROCESSOR_PPCE500MC64
+      || rs6000_cpu == PROCESSOR_PPCE5500
+      || rs6000_cpu == PROCESSOR_PPCE6500)
     {
       /* The e500 and e500mc do not have string instructions, and we set
 	 MASK_STRING above when optimizing for size.  */
@@ -2974,7 +3032,9 @@
 				 || rs6000_cpu == PROCESSOR_POWER6
 				 || rs6000_cpu == PROCESSOR_POWER7
 				 || rs6000_cpu == PROCESSOR_PPCE500MC
-				 || rs6000_cpu == PROCESSOR_PPCE500MC64);
+				 || rs6000_cpu == PROCESSOR_PPCE500MC64
+				 || rs6000_cpu == PROCESSOR_PPCE5500
+				 || rs6000_cpu == PROCESSOR_PPCE6500);
 
   /* Allow debug switches to override the above settings.  These are set to -1
      in rs6000.opt to indicate the user hasn't directly set the switch.  */
@@ -3196,6 +3256,14 @@
 	rs6000_cost = &ppce500mc64_cost;
 	break;
 
+      case PROCESSOR_PPCE5500:
+	rs6000_cost = &ppce5500_cost;
+	break;
+
+      case PROCESSOR_PPCE6500:
+	rs6000_cost = &ppce6500_cost;
+	break;
+
       case PROCESSOR_TITAN:
 	rs6000_cost = &titan_cost;
 	break;
@@ -10653,6 +10721,12 @@
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvehx, exp);
     case ALTIVEC_BUILTIN_STVEWX:
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvewx, exp);
+    case ALTIVEC_BUILTIN_STVEXBX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvexbx, exp);
+    case ALTIVEC_BUILTIN_STVEXHX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvexhx, exp);
+    case ALTIVEC_BUILTIN_STVEXWX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvexwx, exp);
     case ALTIVEC_BUILTIN_STVXL:
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvxl, exp);
 
@@ -10664,6 +10738,18 @@
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvrx, exp);
     case ALTIVEC_BUILTIN_STVRXL:
       return altivec_expand_stv_builtin (CODE_FOR_altivec_stvrxl, exp);
+    case ALTIVEC_BUILTIN_STVFLX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvflx, exp);
+    case ALTIVEC_BUILTIN_STVFLXL:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvflxl, exp);
+    case ALTIVEC_BUILTIN_STVFRX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvfrx, exp);
+    case ALTIVEC_BUILTIN_STVFRXL:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvfrxl, exp);
+    case ALTIVEC_BUILTIN_STVSWX:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvswx, exp);
+    case ALTIVEC_BUILTIN_STVSWXL:
+      return altivec_expand_stv_builtin (CODE_FOR_altivec_stvswxl, exp);
 
     case VSX_BUILTIN_STXVD2X_V2DF:
       return altivec_expand_stv_builtin (CODE_FOR_vsx_store_v2df, exp);
@@ -10798,6 +10884,15 @@
     case ALTIVEC_BUILTIN_LVEWX:
       return altivec_expand_lv_builtin (CODE_FOR_altivec_lvewx,
 					exp, target, false);
+    case ALTIVEC_BUILTIN_LVEXBX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvexbx,
+					exp, target, false);
+    case ALTIVEC_BUILTIN_LVEXHX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvexhx,
+					exp, target, false);
+    case ALTIVEC_BUILTIN_LVEXWX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvexwx,
+					exp, target, false);
     case ALTIVEC_BUILTIN_LVXL:
       return altivec_expand_lv_builtin (CODE_FOR_altivec_lvxl,
 					exp, target, false);
@@ -10816,6 +10911,27 @@
     case ALTIVEC_BUILTIN_LVRXL:
       return altivec_expand_lv_builtin (CODE_FOR_altivec_lvrxl,
 					exp, target, true);
+    case ALTIVEC_BUILTIN_LVTLX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvtlx,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVTLXL:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvtlxl,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVTRX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvtrx,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVTRXL:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvtrxl,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVSWX:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvswx,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVSWXL:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvswxl,
+					exp, target, true);
+    case ALTIVEC_BUILTIN_LVSM:
+      return altivec_expand_lv_builtin (CODE_FOR_altivec_lvsm,
+					exp, target, true);
     case VSX_BUILTIN_LXVD2X_V2DF:
       return altivec_expand_lv_builtin (CODE_FOR_vsx_load_v2df,
 					exp, target, false);
@@ -12135,6 +12251,9 @@
   def_builtin ("__builtin_altivec_lvebx", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEBX);
   def_builtin ("__builtin_altivec_lvehx", v8hi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEHX);
   def_builtin ("__builtin_altivec_lvewx", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEWX);
+  def_builtin ("__builtin_altivec_lvexbx", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEXBX);
+  def_builtin ("__builtin_altivec_lvexhx", v8hi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEXHX);
+  def_builtin ("__builtin_altivec_lvexwx", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVEXWX);
   def_builtin ("__builtin_altivec_lvxl", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVXL);
   def_builtin ("__builtin_altivec_lvx", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVX);
   def_builtin ("__builtin_altivec_stvx", void_ftype_v4si_long_pvoid, ALTIVEC_BUILTIN_STVX);
@@ -12142,6 +12261,9 @@
   def_builtin ("__builtin_altivec_stvxl", void_ftype_v4si_long_pvoid, ALTIVEC_BUILTIN_STVXL);
   def_builtin ("__builtin_altivec_stvebx", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVEBX);
   def_builtin ("__builtin_altivec_stvehx", void_ftype_v8hi_long_pvoid, ALTIVEC_BUILTIN_STVEHX);
+  def_builtin ("__builtin_altivec_stvexbx", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVEXBX);
+  def_builtin ("__builtin_altivec_stvexhx", void_ftype_v8hi_long_pvoid, ALTIVEC_BUILTIN_STVEXHX);
+  def_builtin ("__builtin_altivec_stvexwx", void_ftype_v4si_long_pvoid, ALTIVEC_BUILTIN_STVEXWX);
   def_builtin ("__builtin_vec_ld", opaque_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LD);
   def_builtin ("__builtin_vec_lde", opaque_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LDE);
   def_builtin ("__builtin_vec_ldl", opaque_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LDL);
@@ -12150,12 +12272,18 @@
   def_builtin ("__builtin_vec_lvebx", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEBX);
   def_builtin ("__builtin_vec_lvehx", v8hi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEHX);
   def_builtin ("__builtin_vec_lvewx", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEWX);
+  def_builtin ("__builtin_vec_lvexbx", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEXBX);
+  def_builtin ("__builtin_vec_lvexhx", v8hi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEXHX);
+  def_builtin ("__builtin_vec_lvexwx", v4si_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVEXWX);
   def_builtin ("__builtin_vec_st", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_ST);
   def_builtin ("__builtin_vec_ste", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STE);
   def_builtin ("__builtin_vec_stl", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STL);
   def_builtin ("__builtin_vec_stvewx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEWX);
   def_builtin ("__builtin_vec_stvebx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEBX);
   def_builtin ("__builtin_vec_stvehx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEHX);
+  def_builtin ("__builtin_vec_stvexwx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEXWX);
+  def_builtin ("__builtin_vec_stvexbx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEXBX);
+  def_builtin ("__builtin_vec_stvexhx", void_ftype_opaque_long_pvoid, ALTIVEC_BUILTIN_VEC_STVEXHX);
 
   def_builtin ("__builtin_vsx_lxvd2x_v2df", v2df_ftype_long_pcvoid,
 	       VSX_BUILTIN_LXVD2X_V2DF);
@@ -12186,6 +12314,40 @@
   def_builtin ("__builtin_vec_vsx_st", void_ftype_opaque_long_pvoid,
 	       VSX_BUILTIN_VEC_ST);
 
+  /* Power ISA 2.07 */
+  def_builtin ("__builtin_altivec_lvtlx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVTLX);
+  def_builtin ("__builtin_altivec_lvtlxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVTLXL);
+  def_builtin ("__builtin_altivec_lvtrx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVTRX);
+  def_builtin ("__builtin_altivec_lvtrxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVTRXL);
+
+  def_builtin ("__builtin_vec_lvtlx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVTLX);
+  def_builtin ("__builtin_vec_lvtlxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVTLXL);
+  def_builtin ("__builtin_vec_lvtrx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVTRX);
+  def_builtin ("__builtin_vec_lvtrxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVTRXL);
+
+  def_builtin ("__builtin_altivec_stvflx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVFLX);
+  def_builtin ("__builtin_altivec_stvflxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVFLXL);
+  def_builtin ("__builtin_altivec_stvfrx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVFRX);
+  def_builtin ("__builtin_altivec_stvfrxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVFRXL);
+
+  def_builtin ("__builtin_vec_stvflx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVFLX);
+  def_builtin ("__builtin_vec_stvflxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVFLXL);
+  def_builtin ("__builtin_vec_stvfrx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVFRX);
+  def_builtin ("__builtin_vec_stvfrxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVFRXL);
+
+  def_builtin ("__builtin_altivec_lvswx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVSWX);
+  def_builtin ("__builtin_altivec_lvswxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVSWXL);
+  def_builtin ("__builtin_vec_lvswx",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVSWX);
+  def_builtin ("__builtin_vec_lvswxl", v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVSWXL);
+
+  def_builtin ("__builtin_altivec_lvsm",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_LVSM);
+  def_builtin ("__builtin_vec_lvsm",  v16qi_ftype_long_pcvoid, ALTIVEC_BUILTIN_VEC_LVSM);
+
+  def_builtin ("__builtin_altivec_stvswx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVSWX);
+  def_builtin ("__builtin_altivec_stvswxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_STVSWXL);
+  def_builtin ("__builtin_vec_stvswx",  void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVSWX);
+  def_builtin ("__builtin_vec_stvswxl", void_ftype_v16qi_long_pvoid, ALTIVEC_BUILTIN_VEC_STVSWXL);
+
   def_builtin ("__builtin_vec_step", int_ftype_opaque, ALTIVEC_BUILTIN_VEC_STEP);
   def_builtin ("__builtin_vec_splats", opaque_ftype_opaque, ALTIVEC_BUILTIN_VEC_SPLATS);
   def_builtin ("__builtin_vec_promote", opaque_ftype_opaque, ALTIVEC_BUILTIN_VEC_PROMOTE);
@@ -12489,6 +12651,9 @@
     case ALTIVEC_BUILTIN_VMULEUH_UNS:
     case ALTIVEC_BUILTIN_VMULOUB_UNS:
     case ALTIVEC_BUILTIN_VMULOUH_UNS:
+    case ALTIVEC_BUILTIN_VABSDUB:
+    case ALTIVEC_BUILTIN_VABSDUH:
+    case ALTIVEC_BUILTIN_VABSDUW:
       h.uns_p[0] = 1;
       h.uns_p[1] = 1;
       h.uns_p[2] = 1;
@@ -22280,6 +22445,8 @@
                  || rs6000_cpu_attr == CPU_PPC750
                  || rs6000_cpu_attr == CPU_PPC7400
                  || rs6000_cpu_attr == CPU_PPC7450
+                 || rs6000_cpu_attr == CPU_PPCE5500
+                 || rs6000_cpu_attr == CPU_PPCE6500
                  || rs6000_cpu_attr == CPU_POWER4
                  || rs6000_cpu_attr == CPU_POWER5
 		 || rs6000_cpu_attr == CPU_POWER7
@@ -22824,6 +22991,8 @@
   case CPU_PPCE300C3:
   case CPU_PPCE500MC:
   case CPU_PPCE500MC64:
+  case CPU_PPCE5500:
+  case CPU_PPCE6500:
   case CPU_TITAN:
     return 2;
   case CPU_RIOS2:
@@ -27054,6 +27223,7 @@
 static struct rs6000_opt_mask const rs6000_builtin_mask_names[] =
 {
   { "altivec",		 RS6000_BTM_ALTIVEC,	false, false },
+  { "altivec2",		 RS6000_BTM_ALTIVEC2,	false, false },
   { "vsx",		 RS6000_BTM_VSX,	false, false },
   { "spe",		 RS6000_BTM_SPE,	false, false },
   { "paired",		 RS6000_BTM_PAIRED,	false, false },
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000-c.c gcc-20120223/gcc/config/rs6000/rs6000-c.c
--- gcc-20120223-orig/gcc/config/rs6000/rs6000-c.c	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000-c.c	2012-02-24 10:50:12.906038982 -0600
@@ -332,6 +332,8 @@
       if (!flag_iso)
 	rs6000_define_or_undefine_macro (define_p, "__APPLE_ALTIVEC__");
     }
+  if ((flags & MASK_ALTIVEC2) != 0)
+    rs6000_define_or_undefine_macro (define_p, "__ALTIVEC2__");
   if ((flags & MASK_VSX) != 0)
     rs6000_define_or_undefine_macro (define_p, "__VSX__");
 
@@ -622,6 +624,24 @@
     RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V16QI, 0, 0 },
 
   /* Binary AltiVec/VSX builtins.  */
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUB,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUB,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUB,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUH,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUH,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUH,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUW,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUW,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_ABSD, ALTIVEC_BUILTIN_VABSDUW,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 },
   { ALTIVEC_BUILTIN_VEC_ADD, ALTIVEC_BUILTIN_VADDUBM,
     RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 },
   { ALTIVEC_BUILTIN_VEC_ADD, ALTIVEC_BUILTIN_VADDUBM,
@@ -1137,6 +1157,24 @@
     RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
   { ALTIVEC_BUILTIN_VEC_LVEBX, ALTIVEC_BUILTIN_LVEBX,
     RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXWX, ALTIVEC_BUILTIN_LVEXWX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXWX, ALTIVEC_BUILTIN_LVEXWX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXWX, ALTIVEC_BUILTIN_LVEXWX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXWX, ALTIVEC_BUILTIN_LVEXWX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_long, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXWX, ALTIVEC_BUILTIN_LVEXWX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_long, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXHX, ALTIVEC_BUILTIN_LVEXHX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXHX, ALTIVEC_BUILTIN_LVEXHX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXBX, ALTIVEC_BUILTIN_LVEXBX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVEXBX, ALTIVEC_BUILTIN_LVEXBX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
   { ALTIVEC_BUILTIN_VEC_LDL, ALTIVEC_BUILTIN_LVXL,
     RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
   { ALTIVEC_BUILTIN_VEC_LDL, ALTIVEC_BUILTIN_LVXL,
@@ -1389,6 +1427,258 @@
     RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
   { ALTIVEC_BUILTIN_VEC_LVRXL, ALTIVEC_BUILTIN_LVRXL,
     RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLX, ALTIVEC_BUILTIN_LVTLX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTLXL, ALTIVEC_BUILTIN_LVTLXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRX, ALTIVEC_BUILTIN_LVTRX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVTRXL, ALTIVEC_BUILTIN_LVTRXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWX, ALTIVEC_BUILTIN_LVSWX,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSWXL, ALTIVEC_BUILTIN_LVSWXL,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI, 0 },
+  { ALTIVEC_BUILTIN_VEC_LVSM, ALTIVEC_BUILTIN_LVSM,
+    RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 },
   { ALTIVEC_BUILTIN_VEC_MAX, ALTIVEC_BUILTIN_VMAXUB,
     RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 },
   { ALTIVEC_BUILTIN_VEC_MAX, ALTIVEC_BUILTIN_VMAXUB,
@@ -2865,6 +3155,46 @@
     RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
   { ALTIVEC_BUILTIN_VEC_STVEBX, ALTIVEC_BUILTIN_STVEBX,
     RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXWX, ALTIVEC_BUILTIN_STVEXWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXHX, ALTIVEC_BUILTIN_STVEXHX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
+  { ALTIVEC_BUILTIN_VEC_STVEXBX, ALTIVEC_BUILTIN_STVEXBX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_void },
   { ALTIVEC_BUILTIN_VEC_STL, ALTIVEC_BUILTIN_STVXL,
     RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
   { ALTIVEC_BUILTIN_VEC_STL, ALTIVEC_BUILTIN_STVXL,
@@ -3069,6 +3399,222 @@
     RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
   { ALTIVEC_BUILTIN_VEC_STVRXL, ALTIVEC_BUILTIN_STVRXL,
     RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLX, ALTIVEC_BUILTIN_STVFLX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFLXL, ALTIVEC_BUILTIN_STVFLXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRX, ALTIVEC_BUILTIN_STVFRX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVFRXL, ALTIVEC_BUILTIN_STVFRXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWX, ALTIVEC_BUILTIN_STVSWX,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_V4SF },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V4SF, RS6000_BTI_INTSI, ~RS6000_BTI_float },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_INTSI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V4SI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTSI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_pixel_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_pixel_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_INTHI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V8HI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V8HI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTHI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_bool_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_bool_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_INTQI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_unsigned_V16QI },
+  { ALTIVEC_BUILTIN_VEC_STVSWXL, ALTIVEC_BUILTIN_STVSWXL,
+    RS6000_BTI_void, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI },
   { VSX_BUILTIN_VEC_XXSLDWI, VSX_BUILTIN_XXSLDWI_16QI,
     RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_NOT_OPAQUE },
   { VSX_BUILTIN_VEC_XXSLDWI, VSX_BUILTIN_XXSLDWI_16QI,
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000-cpus.def gcc-20120223/gcc/config/rs6000/rs6000-cpus.def
--- gcc-20120223-orig/gcc/config/rs6000/rs6000-cpus.def	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000-cpus.def	2012-02-23 16:55:59.503038975 -0600
@@ -88,6 +88,12 @@
 	    | MASK_ISEL)
 RS6000_CPU ("e500mc64", PROCESSOR_PPCE500MC64,
 	    POWERPC_BASE_MASK | MASK_POWERPC64 | MASK_PPC_GFXOPT | MASK_ISEL)
+RS6000_CPU ("e5500", PROCESSOR_PPCE5500, POWERPC_BASE_MASK | MASK_POWERPC64
+	    | MASK_PPC_GFXOPT | MASK_ISEL | MASK_CMPB | MASK_POPCNTB
+	    | MASK_POPCNTD)
+RS6000_CPU ("e6500", PROCESSOR_PPCE6500, POWERPC_7400_MASK | MASK_POWERPC64
+	    | MASK_MFCRF | MASK_ISEL | MASK_CMPB | MASK_POPCNTB | MASK_POPCNTD
+	    | MASK_ALTIVEC2)
 RS6000_CPU ("860", PROCESSOR_MPCCORE, POWERPC_BASE_MASK | MASK_SOFT_FLOAT)
 RS6000_CPU ("970", PROCESSOR_POWER4,
 	    POWERPC_7400_MASK | MASK_PPC_GPOPT | MASK_MFCRF | MASK_POWERPC64)
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000.h gcc-20120223/gcc/config/rs6000/rs6000.h
--- gcc-20120223-orig/gcc/config/rs6000/rs6000.h	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000.h	2012-02-24 11:00:05.479038979 -0600
@@ -168,6 +168,8 @@
 %{mcpu=e300c3: -me300} \
 %{mcpu=e500mc: -me500mc} \
 %{mcpu=e500mc64: -me500mc64} \
+%{mcpu=e5500: -me5500} \
+%{mcpu=e6500: -me6500} \
 %{maltivec: -maltivec} \
 %{mvsx: -mvsx %{!maltivec: -maltivec} %{!mcpu*: %(asm_cpu_power7)}} \
 -many"
@@ -475,13 +477,15 @@
 
 #define TARGET_FCTIDZ	TARGET_FCFID
 #define TARGET_STFIWX	TARGET_PPC_GFXOPT
-#define TARGET_LFIWAX	TARGET_CMPB
-#define TARGET_LFIWZX	TARGET_POPCNTD
-#define TARGET_FCFIDS	TARGET_POPCNTD
-#define TARGET_FCFIDU	TARGET_POPCNTD
-#define TARGET_FCFIDUS	TARGET_POPCNTD
-#define TARGET_FCTIDUZ	TARGET_POPCNTD
-#define TARGET_FCTIWUZ	TARGET_POPCNTD
+#define TARGET_LFIWAX	(TARGET_CMPB && rs6000_cpu != PROCESSOR_PPCE5500 \
+			 && rs6000_cpu != PROCESSOR_PPCE6500)
+#define TARGET_LFIWZX	(TARGET_POPCNTD && rs6000_cpu != PROCESSOR_PPCE5500 \
+			 && rs6000_cpu != PROCESSOR_PPCE6500)
+#define TARGET_FCFIDS	TARGET_LFIWZX
+#define TARGET_FCFIDU	TARGET_LFIWZX
+#define TARGET_FCFIDUS	TARGET_LFIWZX
+#define TARGET_FCTIDUZ	TARGET_LFIWZX
+#define TARGET_FCTIWUZ	TARGET_LFIWZX
 
 /* For power systems, we want to enable Altivec and VSX builtins even if the
    user did not use -maltivec or -mvsx to allow the builtins to be used inside
@@ -510,10 +514,14 @@
 
 #define TARGET_FRE	(TARGET_HARD_FLOAT && TARGET_FPRS \
 			 && TARGET_DOUBLE_FLOAT \
-			 && (TARGET_POPCNTB || VECTOR_UNIT_VSX_P (DFmode)))
+			 && (TARGET_POPCNTB || VECTOR_UNIT_VSX_P (DFmode)) \
+			 && rs6000_cpu != PROCESSOR_PPCE5500 \
+			 && rs6000_cpu != PROCESSOR_PPCE6500)
 
 #define TARGET_FRSQRTES	(TARGET_HARD_FLOAT && TARGET_POPCNTB \
-			 && TARGET_FPRS && TARGET_SINGLE_FLOAT)
+			 && TARGET_FPRS && TARGET_SINGLE_FLOAT \
+			 && rs6000_cpu != PROCESSOR_PPCE5500 \
+			 && rs6000_cpu != PROCESSOR_PPCE6500)
 
 #define TARGET_FRSQRTE	(TARGET_HARD_FLOAT && TARGET_FPRS \
 			 && TARGET_DOUBLE_FLOAT \
@@ -2326,6 +2334,7 @@
    target flags, and pick two random bits for SPE and paired which aren't in
    target_flags.  */
 #define RS6000_BTM_ALTIVEC	MASK_ALTIVEC	/* VMX/altivec vectors.  */
+#define RS6000_BTM_ALTIVEC2	MASK_ALTIVEC2	/* ISA 2.07 altivec vectors.  */
 #define RS6000_BTM_VSX		MASK_VSX	/* VSX (vector/scalar).  */
 #define RS6000_BTM_SPE		MASK_STRING	/* E500 */
 #define RS6000_BTM_PAIRED	MASK_MULHW	/* 750CL paired insns.  */
@@ -2338,6 +2347,7 @@
 #define RS6000_BTM_CELL		MASK_FPRND	/* Target is cell powerpc.  */
 
 #define RS6000_BTM_COMMON	(RS6000_BTM_ALTIVEC			\
+				 | RS6000_BTM_ALTIVEC2			\
 				 | RS6000_BTM_VSX			\
 				 | RS6000_BTM_FRE			\
 				 | RS6000_BTM_FRES			\
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000.md gcc-20120223/gcc/config/rs6000/rs6000.md
--- gcc-20120223-orig/gcc/config/rs6000/rs6000.md	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000.md	2012-02-23 16:55:59.505039004 -0600
@@ -144,7 +144,7 @@
 \f
 ;; Define an insn type attribute.  This is used in function unit delay
 ;; computations.
-(define_attr "type" "integer,two,three,load,load_ext,load_ext_u,load_ext_ux,load_ux,load_u,store,store_ux,store_u,fpload,fpload_ux,fpload_u,fpstore,fpstore_ux,fpstore_u,vecload,vecstore,imul,imul2,imul3,lmul,idiv,ldiv,insert_word,branch,cmp,fast_compare,compare,var_delayed_compare,delayed_compare,imul_compare,lmul_compare,fpcompare,cr_logical,delayed_cr,mfcr,mfcrf,mtcr,mfjmpr,mtjmpr,fp,fpsimple,dmul,sdiv,ddiv,ssqrt,dsqrt,jmpreg,brinc,vecsimple,veccomplex,vecdiv,veccmp,veccmpsimple,vecperm,vecfloat,vecfdiv,vecdouble,isync,sync,load_l,store_c,shift,trap,insert_dword,var_shift_rotate,cntlz,exts,mffgpr,mftgpr,isel"
+(define_attr "type" "integer,two,three,load,load_ext,load_ext_u,load_ext_ux,load_ux,load_u,store,store_ux,store_u,fpload,fpload_ux,fpload_u,fpstore,fpstore_ux,fpstore_u,vecload,vecstore,imul,imul2,imul3,lmul,idiv,ldiv,insert_word,branch,cmp,fast_compare,compare,var_delayed_compare,delayed_compare,imul_compare,lmul_compare,fpcompare,cr_logical,delayed_cr,mfcr,mfcrf,mtcr,mfjmpr,mtjmpr,fp,fpsimple,dmul,sdiv,ddiv,ssqrt,dsqrt,jmpreg,brinc,vecsimple,veccomplex,vecdiv,veccmp,veccmpsimple,vecperm,vecfloat,vecfdiv,vecdouble,isync,sync,load_l,store_c,shift,trap,insert_dword,var_shift_rotate,cntlz,exts,mffgpr,mftgpr,isel,popcnt"
   (const_string "integer"))
 
 ;; Define floating point instruction sub-types for use with Xfpu.md
@@ -166,7 +166,7 @@
 ;; Processor type -- this attribute must exactly match the processor_type
 ;; enumeration in rs6000.h.
 
-(define_attr "cpu" "rios1,rios2,rs64a,mpccore,ppc403,ppc405,ppc440,ppc476,ppc601,ppc603,ppc604,ppc604e,ppc620,ppc630,ppc750,ppc7400,ppc7450,ppc8540,ppce300c2,ppce300c3,ppce500mc,ppce500mc64,power4,power5,power6,power7,cell,ppca2,titan"
+(define_attr "cpu" "rios1,rios2,rs64a,mpccore,ppc403,ppc405,ppc440,ppc476,ppc601,ppc603,ppc604,ppc604e,ppc620,ppc630,ppc750,ppc7400,ppc7450,ppc8540,ppce300c2,ppce300c3,ppce500mc,ppce500mc64,ppce5500,ppce6500,power4,power5,power6,power7,cell,ppca2,titan"
   (const (symbol_ref "rs6000_cpu_attr")))
 
 
@@ -194,6 +194,8 @@
 (include "e300c2c3.md")
 (include "e500mc.md")
 (include "e500mc64.md")
+(include "e5500.md")
+(include "e6500.md")
 (include "power4.md")
 (include "power5.md")
 (include "power6.md")
@@ -2329,13 +2331,17 @@
         (unspec:GPR [(match_operand:GPR 1 "gpc_reg_operand" "r")]
                      UNSPEC_POPCNTB))]
   "TARGET_POPCNTB"
-  "popcntb %0,%1")
+  "popcntb %0,%1"
+  [(set_attr "length" "4")
+   (set_attr "type" "popcnt")])
 
 (define_insn "popcntd<mode>2"
   [(set (match_operand:GPR 0 "gpc_reg_operand" "=r")
 	(popcount:GPR (match_operand:GPR 1 "gpc_reg_operand" "r")))]
   "TARGET_POPCNTD"
-  "popcnt<wd> %0,%1")
+  "popcnt<wd> %0,%1"
+  [(set_attr "length" "4")
+   (set_attr "type" "popcnt")])
 
 (define_expand "popcount<mode>2"
   [(set (match_operand:GPR 0 "gpc_reg_operand" "")
@@ -5984,10 +5990,10 @@
    && ((TARGET_PPC_GFXOPT
         && !HONOR_NANS (<MODE>mode)
         && !HONOR_SIGNED_ZEROS (<MODE>mode))
-       || TARGET_CMPB
+       || TARGET_LFIWAX
        || VECTOR_UNIT_VSX_P (<MODE>mode))"
 {
-  if (TARGET_CMPB || VECTOR_UNIT_VSX_P (<MODE>mode))
+  if (TARGET_LFIWAX || VECTOR_UNIT_VSX_P (<MODE>mode))
     {
       emit_insn (gen_copysign<mode>3_fcpsgn (operands[0], operands[1],
 					     operands[2]));
@@ -6006,7 +6012,7 @@
 	(unspec:SFDF [(match_operand:SFDF 1 "gpc_reg_operand" "<rreg2>")
 		      (match_operand:SFDF 2 "gpc_reg_operand" "<rreg2>")]
 		     UNSPEC_COPYSIGN))]
-  "TARGET_CMPB && !VECTOR_UNIT_VSX_P (<MODE>mode)"
+  "TARGET_LFIWAX && !VECTOR_UNIT_VSX_P (<MODE>mode)"
   "fcpsgn %0,%2,%1"
   [(set_attr "type" "fp")])
 
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000.opt gcc-20120223/gcc/config/rs6000/rs6000.opt
--- gcc-20120223-orig/gcc/config/rs6000/rs6000.opt	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000.opt	2012-02-23 16:55:59.506039011 -0600
@@ -147,6 +147,10 @@
 Target Report Mask(ALTIVEC) Save
 Use AltiVec instructions
 
+maltivec2
+Target Report Mask(ALTIVEC2) Save
+Use Power ISA 2.07 AltiVec instructions
+
 mhard-dfp
 Target Report Mask(DFP) Save
 Use decimal floating point instructions
diff -ruN gcc-20120223-orig/gcc/config/rs6000/rs6000-opts.h gcc-20120223/gcc/config/rs6000/rs6000-opts.h
--- gcc-20120223-orig/gcc/config/rs6000/rs6000-opts.h	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/rs6000-opts.h	2012-02-23 16:55:59.506039011 -0600
@@ -53,6 +53,8 @@
    PROCESSOR_PPCE300C3,
    PROCESSOR_PPCE500MC,
    PROCESSOR_PPCE500MC64,
+   PROCESSOR_PPCE5500,
+   PROCESSOR_PPCE6500,
    PROCESSOR_POWER4,
    PROCESSOR_POWER5,
    PROCESSOR_POWER6,
diff -ruN gcc-20120223-orig/gcc/config.gcc gcc-20120223/gcc/config.gcc
--- gcc-20120223-orig/gcc/config.gcc	2012-02-23 16:32:15.000000000 -0600
+++ gcc-20120223/gcc/config.gcc	2012-02-23 16:55:59.506039011 -0600
@@ -402,7 +402,7 @@
 	extra_headers="ppc-asm.h altivec.h spe.h ppu_intrinsics.h paired.h spu2vmx.h vec_types.h si2vmx.h"
 	need_64bit_hwint=yes
 	case x$with_cpu in
-	    xpowerpc64|xdefault64|x6[23]0|x970|xG5|xpower[34567]|xpower6x|xrs64a|xcell|xa2|xe500mc64)
+	    xpowerpc64|xdefault64|x6[23]0|x970|xG5|xpower[34567]|xpower6x|xrs64a|xcell|xa2|xe500mc64|xe5500|xe6500)
 		cpu_is_64bit=yes
 		;;
 	esac
@@ -3316,8 +3316,8 @@
 			| 401 | 403 | 405 | 405fp | 440 | 440fp | 464 | 464fp \
 			| 476 | 476fp | 505 | 601 | 602 | 603 | 603e | ec603e \
 			| 604 | 604e | 620 | 630 | 740 | 750 | 7400 | 7450 \
-			| a2 | e300c[23] | 854[08] | e500mc | e500mc64 | titan\
-			| 801 | 821 | 823 | 860 | 970 | G3 | G4 | G5 | cell)
+			| a2 | e300c[23] | 854[08] | e500mc | e500mc64 | e5500 | e6500 \
+			| titan | 801 | 821 | 823 | 860 | 970 | G3 | G4 | G5 | cell)
 				# OK
 				;;
 			*)
diff -ruN gcc-20120223-orig/gcc/doc/extend.texi gcc-20120223/gcc/doc/extend.texi
--- gcc-20120223-orig/gcc/doc/extend.texi	2012-02-23 16:10:14.000000000 -0600
+++ gcc-20120223/gcc/doc/extend.texi	2012-03-02 16:10:09.162038959 -0600
@@ -13373,6 +13373,291 @@
 @samp{vec_vsx_st} builtins will always generate the VSX @samp{LXVD2X},
 @samp{LXVW4X}, @samp{STXVD2X}, and @samp{STXVW4X} instructions.
 
+Using @option{-maltivec2} will extend the AltiVec interface with the
+following additional functions:
+
+@smallexample
+vector unsigned char vec_absd (vector unsigned char, vector unsigned char);
+vector unsigned char vec_absd (vector bool char, vector unsigned char);
+vector unsigned char vec_absd (vector unsigned char, vector bool char);
+vector unsigned short vec_absd (vector unsigned short, vector unsigned short);
+vector unsigned short vec_absd (vector bool short, vector unsigned short);
+vector unsigned short vec_absd (vector unsigned short, vector bool short);
+vector unsigned int vec_absd (vector unsigned int, vector unsigned int);
+vector unsigned int vec_absd (vector bool int, vector unsigned int);
+vector unsigned int vec_absd (vector unsigned int, vector bool int);
+
+vector signed char vec_lvexbx (long, signed char *);
+vector unsigned char vec_lvexbx (long, unsigned char *);
+vector signed short vec_lvexhx (long, signed short *);
+vector unsigned short vec_lvexhx (long, unsigned short *);
+vector float vec_lvexwx (long, float *);
+vector signed int vec_lvexwx (long, signed int *);
+vector unsigned int vec_lvexwx (long, unsigned int *);
+vector signed int vec_lvexwx (long, signed long *);
+vector unsigned int vec_lvexwx (long, unsigned long *);
+
+void vec_stvexbx (vector signed char, long, signed char *);
+void vec_stvexbx (vector unsigned char, long, unsigned char *);
+void vec_stvexbx (vector bool char, long, signed char *);
+void vec_stvexbx (vector bool char, long, unsigned char *);
+void vec_stvexbx (vector signed char, long, void *);
+void vec_stvexbx (vector unsigned char, long, void *);
+void vec_stvexhx (vector signed short, long, signed short *);
+void vec_stvexhx (vector unsigned short, long, unsigned short *);
+void vec_stvexhx (vector bool short, long, signed short *);
+void vec_stvexhx (vector bool short, long, unsigned short *);
+void vec_stvexhx (vector signed short, long, void *);
+void vec_stvexhx (vector unsigned short, long, void *);
+void vec_stvexwx (vector float, long, float *);
+void vec_stvexwx (vector signed int, long, signed int *);
+void vec_stvexwx (vector unsigned int, long, unsigned int *);
+void vec_stvexwx (vector bool int, long, signed int *);
+void vec_stvexwx (vector bool int, long, unsigned int *);
+void vec_stvexwx (vector float, long, void *);
+void vec_stvexwx (vector signed int, long, void *);
+void vec_stvexwx (vector unsigned int, long, void *);
+
+vector float vec_lvtlx (long, vector float *);
+vector float vec_lvtlx (long, float *);
+vector bool int vec_lvtlx (long, vector bool int *);
+vector signed int vec_lvtlx (long, vector signed int *);
+vector signed int vec_lvtlx (long, signed int *);
+vector unsigned int vec_lvtlx (long, vector unsigned int *);
+vector unsigned int vec_lvtlx (long, unsigned int *);
+vector bool short vec_lvtlx (long, vector bool short *);
+vector pixel vec_lvtlx (long, vector pixel *);
+vector signed short vec_lvtlx (long, vector signed short *);
+vector signed short vec_lvtlx (long, signed short *);
+vector unsigned short vec_lvtlx (long, vector unsigned short *);
+vector unsigned short vec_lvtlx (long, unsigned short *);
+vector bool char vec_lvtlx (long, vector bool char *);
+vector signed char vec_lvtlx (long, vector signed char *);
+vector signed char vec_lvtlx (long, signed char *);
+vector unsigned char vec_lvtlx (long, vector unsigned char *);
+vector unsigned char vec_lvtlx (long, unsigned char *);
+vector float vec_lvtlxl (long, vector float *);
+vector float vec_lvtlxl (long, float *);
+vector bool int vec_lvtlxl (long, vector bool int *);
+vector signed int vec_lvtlxl (long, vector signed int *);
+vector signed int vec_lvtlxl (long, signed int *);
+vector unsigned int vec_lvtlxl (long, vector unsigned int *);
+vector unsigned int vec_lvtlxl (long, unsigned int *);
+vector bool short vec_lvtlxl (long, vector bool short *);
+vector pixel vec_lvtlxl (long, vector pixel *);
+vector signed short vec_lvtlxl (long, vector signed short *);
+vector signed short vec_lvtlxl (long, signed short *);
+vector unsigned short vec_lvtlxl (long, vector unsigned short *);
+vector unsigned short vec_lvtlxl (long, unsigned short *);
+vector bool char vec_lvtlxl (long, vector bool char *);
+vector signed char vec_lvtlxl (long, vector signed char *);
+vector signed char vec_lvtlxl (long, signed char *);
+vector unsigned char vec_lvtlxl (long, vector unsigned char *);
+vector unsigned char vec_lvtlxl (long, unsigned char *);
+vector float vec_lvtrx (long, vector float *);
+vector float vec_lvtrx (long, float *);
+vector bool int vec_lvtrx (long, vector bool int *);
+vector signed int vec_lvtrx (long, vector signed int *);
+vector signed int vec_lvtrx (long, signed int *);
+vector unsigned int vec_lvtrx (long, vector unsigned int *);
+vector unsigned int vec_lvtrx (long, unsigned int *);
+vector bool short vec_lvtrx (long, vector bool short *);
+vector pixel vec_lvtrx (long, vector pixel *);
+vector signed short vec_lvtrx (long, vector signed short *);
+vector signed short vec_lvtrx (long, signed short *);
+vector unsigned short vec_lvtrx (long, vector unsigned short *);
+vector unsigned short vec_lvtrx (long, unsigned short *);
+vector bool char vec_lvtrx (long, vector bool char *);
+vector signed char vec_lvtrx (long, vector signed char *);
+vector signed char vec_lvtrx (long, signed char *);
+vector unsigned char vec_lvtrx (long, vector unsigned char *);
+vector unsigned char vec_lvtrx (long, unsigned char *);
+vector float vec_lvtrxl (long, vector float *);
+vector float vec_lvtrxl (long, float *);
+vector bool int vec_lvtrxl (long, vector bool int *);
+vector signed int vec_lvtrxl (long, vector signed int *);
+vector signed int vec_lvtrxl (long, signed int *);
+vector unsigned int vec_lvtrxl (long, vector unsigned int *);
+vector unsigned int vec_lvtrxl (long, unsigned int *);
+vector bool short vec_lvtrxl (long, vector bool short *);
+vector pixel vec_lvtrxl (long, vector pixel *);
+vector signed short vec_lvtrxl (long, vector signed short *);
+vector signed short vec_lvtrxl (long, signed short *);
+vector unsigned short vec_lvtrxl (long, vector unsigned short *);
+vector unsigned short vec_lvtrxl (long, unsigned short *);
+vector bool char vec_lvtrxl (long, vector bool char *);
+vector signed char vec_lvtrxl (long, vector signed char *);
+vector signed char vec_lvtrxl (long, signed char *);
+vector unsigned char vec_lvtrxl (long, vector unsigned char *);
+vector unsigned char vec_lvtrxl (long, unsigned char *);
+
+void vec_stvflx (vector float, long, vector float *);
+void vec_stvflx (vector float, long, float *);
+void vec_stvflx (vector bool int, long, vector bool int *);
+void vec_stvflx (vector signed int, long, vector signed int *);
+void vec_stvflx (vector signed int, long, signed int *);
+void vec_stvflx (vector unsigned int, long, vector unsigned int *);
+void vec_stvflx (vector unsigned int, long, unsigned int *);
+void vec_stvflx (vector bool short, long, vector bool short *);
+void vec_stvflx (vector pixel, long, vector pixel *);
+void vec_stvflx (vector signed short, long, vector signed short *);
+void vec_stvflx (vector signed short, long, signed short *);
+void vec_stvflx (vector unsigned short, long, vector unsigned short *);
+void vec_stvflx (vector unsigned short, long, unsigned short *);
+void vec_stvflx (vector bool char, long, vector bool char *);
+void vec_stvflx (vector signed char, long, vector signed char *);
+void vec_stvflx (vector signed char, long, signed char *);
+void vec_stvflx (vector unsigned char, long, vector unsigned char *);
+void vec_stvflx (vector unsigned char, long, unsigned char *);
+void vec_stvflxl (vector float, long, vector float *);
+void vec_stvflxl (vector float, long, float *);
+void vec_stvflxl (vector bool int, long, vector bool int *);
+void vec_stvflxl (vector signed int, long, vector signed int *);
+void vec_stvflxl (vector signed int, long, signed int *);
+void vec_stvflxl (vector unsigned int, long, vector unsigned int *);
+void vec_stvflxl (vector unsigned int, long, unsigned int *);
+void vec_stvflxl (vector bool short, long, vector bool short *);
+void vec_stvflxl (vector pixel, long, vector pixel *);
+void vec_stvflxl (vector signed short, long, vector signed short *);
+void vec_stvflxl (vector signed short, long, signed short *);
+void vec_stvflxl (vector unsigned short, long, vector unsigned short *);
+void vec_stvflxl (vector unsigned short, long, unsigned short *);
+void vec_stvflxl (vector bool char, long, vector bool char *);
+void vec_stvflxl (vector signed char, long, vector signed char *);
+void vec_stvflxl (vector signed char, long, signed char *);
+void vec_stvflxl (vector unsigned char, long, vector unsigned char *);
+void vec_stvflxl (vector unsigned char, long, unsigned char *);
+void vec_stvfrx (vector float, long, vector float *);
+void vec_stvfrx (vector float, long, float *);
+void vec_stvfrx (vector bool int, long, vector bool int *);
+void vec_stvfrx (vector signed int, long, vector signed int *);
+void vec_stvfrx (vector signed int, long, signed int *);
+void vec_stvfrx (vector unsigned int, long, vector unsigned int *);
+void vec_stvfrx (vector unsigned int, long, unsigned int *);
+void vec_stvfrx (vector bool short, long, vector bool short *);
+void vec_stvfrx (vector pixel, long, vector pixel *);
+void vec_stvfrx (vector signed short, long, vector signed short *);
+void vec_stvfrx (vector signed short, long, signed short *);
+void vec_stvfrx (vector unsigned short, long, vector unsigned short *);
+void vec_stvfrx (vector unsigned short, long, unsigned short *);
+void vec_stvfrx (vector bool char, long, vector bool char *);
+void vec_stvfrx (vector signed char, long, vector signed char *);
+void vec_stvfrx (vector signed char, long, signed char *);
+void vec_stvfrx (vector unsigned char, long, vector unsigned char *);
+void vec_stvfrx (vector unsigned char, long, unsigned char *);
+void vec_stvfrxl (vector float, long, vector float *);
+void vec_stvfrxl (vector float, long, float *);
+void vec_stvfrxl (vector bool int, long, vector bool int *);
+void vec_stvfrxl (vector signed int, long, vector signed int *);
+void vec_stvfrxl (vector signed int, long, signed int *);
+void vec_stvfrxl (vector unsigned int, long, vector unsigned int *);
+void vec_stvfrxl (vector unsigned int, long, unsigned int *);
+void vec_stvfrxl (vector bool short, long, vector bool short *);
+void vec_stvfrxl (vector pixel, long, vector pixel *);
+void vec_stvfrxl (vector signed short, long, vector signed short *);
+void vec_stvfrxl (vector signed short, long, signed short *);
+void vec_stvfrxl (vector unsigned short, long, vector unsigned short *);
+void vec_stvfrxl (vector unsigned short, long, unsigned short *);
+void vec_stvfrxl (vector bool char, long, vector bool char *);
+void vec_stvfrxl (vector signed char, long, vector signed char *);
+void vec_stvfrxl (vector signed char, long, signed char *);
+void vec_stvfrxl (vector unsigned char, long, vector unsigned char *);
+void vec_stvfrxl (vector unsigned char, long, unsigned char *);
+
+vector float vec_lvswx (long, vector float *);
+vector float vec_lvswx (long, float *);
+vector bool int vec_lvswx (long, vector bool int *);
+vector signed int vec_lvswx (long, vector signed int *);
+vector signed int vec_lvswx (long, signed int *);
+vector unsigned int vec_lvswx (long, vector unsigned int *);
+vector unsigned int vec_lvswx (long, unsigned int *);
+vector bool short vec_lvswx (long, vector bool short *);
+vector pixel vec_lvswx (long, vector pixel *);
+vector signed short vec_lvswx (long, vector signed short *);
+vector signed short vec_lvswx (long, signed short *);
+vector unsigned short vec_lvswx (long, vector unsigned short *);
+vector unsigned short vec_lvswx (long, unsigned short *);
+vector bool char vec_lvswx (long, vector bool char *);
+vector signed char vec_lvswx (long, vector signed char *);
+vector signed char vec_lvswx (long, signed char *);
+vector unsigned char vec_lvswx (long, vector unsigned char *);
+vector unsigned char vec_lvswx (long, unsigned char *);
+vector float vec_lvswxl (long, vector float *);
+vector float vec_lvswxl (long, float *);
+vector bool int vec_lvswxl (long, vector bool int *);
+vector signed int vec_lvswxl (long, vector signed int *);
+vector signed int vec_lvswxl (long, signed int *);
+vector unsigned int vec_lvswxl (long, vector unsigned int *);
+vector unsigned int vec_lvswxl (long, unsigned int *);
+vector bool short vec_lvswxl (long, vector bool short *);
+vector pixel vec_lvswxl (long, vector pixel *);
+vector signed short vec_lvswxl (long, vector signed short *);
+vector signed short vec_lvswxl (long, signed short *);
+vector unsigned short vec_lvswxl (long, vector unsigned short *);
+vector unsigned short vec_lvswxl (long, unsigned short *);
+vector bool char vec_lvswxl (long, vector bool char *);
+vector signed char vec_lvswxl (long, vector signed char *);
+vector signed char vec_lvswxl (long, signed char *);
+vector unsigned char vec_lvswxl (long, vector unsigned char *);
+vector unsigned char vec_lvswxl (long, unsigned char *);
+
+void vec_stvswx (vector float, long, vector float *);
+void vec_stvswx (vector float, long, float *);
+void vec_stvswx (vector bool int, long, vector bool int *);
+void vec_stvswx (vector signed int, long, vector signed int *);
+void vec_stvswx (vector signed int, long, signed int *);
+void vec_stvswx (vector unsigned int, long, vector unsigned int *);
+void vec_stvswx (vector unsigned int, long, unsigned int *);
+void vec_stvswx (vector bool short, long, vector bool short *);
+void vec_stvswx (vector pixel, long, vector pixel *);
+void vec_stvswx (vector signed short, long, vector signed short *);
+void vec_stvswx (vector signed short, long, signed short *);
+void vec_stvswx (vector unsigned short, long, vector unsigned short *);
+void vec_stvswx (vector unsigned short, long, unsigned short *);
+void vec_stvswx (vector bool char, long, vector bool char *);
+void vec_stvswx (vector signed char, long, vector signed char *);
+void vec_stvswx (vector signed char, long, signed char *);
+void vec_stvswx (vector unsigned char, long, vector unsigned char *);
+void vec_stvswx (vector unsigned char, long, unsigned char *);
+void vec_stvswxl (vector float, long, vector float *);
+void vec_stvswxl (vector float, long, float *);
+void vec_stvswxl (vector bool int, long, vector bool int *);
+void vec_stvswxl (vector signed int, long, vector signed int *);
+void vec_stvswxl (vector signed int, long, signed int *);
+void vec_stvswxl (vector unsigned int, long, vector unsigned int *);
+void vec_stvswxl (vector unsigned int, long, unsigned int *);
+void vec_stvswxl (vector bool short, long, vector bool short *);
+void vec_stvswxl (vector pixel, long, vector pixel *);
+void vec_stvswxl (vector signed short, long, vector signed short *);
+void vec_stvswxl (vector signed short, long, signed short *);
+void vec_stvswxl (vector unsigned short, long, vector unsigned short *);
+void vec_stvswxl (vector unsigned short, long, unsigned short *);
+void vec_stvswxl (vector bool char, long, vector bool char *);
+void vec_stvswxl (vector signed char, long, vector signed char *);
+void vec_stvswxl (vector signed char, long, signed char *);
+void vec_stvswxl (vector unsigned char, long, vector unsigned char *);
+void vec_stvswxl (vector unsigned char, long, unsigned char *);
+
+vector float vec_lvsm (long, vector float *);
+vector float vec_lvsm (long, float *);
+vector bool int vec_lvsm (long, vector bool int *);
+vector signed int vec_lvsm (long, vector signed int *);
+vector signed int vec_lvsm (long, signed int *);
+vector unsigned int vec_lvsm (long, vector unsigned int *);
+vector unsigned int vec_lvsm (long, unsigned int *);
+vector bool short vec_lvsm (long, vector bool short *);
+vector pixel vec_lvsm (long, vector pixel *);
+vector signed short vec_lvsm (long, vector signed short *);
+vector signed short vec_lvsm (long, signed short *);
+vector unsigned short vec_lvsm (long, vector unsigned short *);
+vector unsigned short vec_lvsm (long, unsigned short *);
+vector bool char vec_lvsm (long, vector bool char *);
+vector signed char vec_lvsm (long, vector signed char *);
+vector signed char vec_lvsm (long, signed char *);
+vector unsigned char vec_lvsm (long, vector unsigned char *);
+vector unsigned char vec_lvsm (long, unsigned char *);
+@end smallexample
+
 GCC provides a few other builtins on Powerpc to access certain instructions:
 @smallexample
 float __builtin_recipdivf (float, float);
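
For reference, a minimal sketch of how the new load/store built-ins
documented above could be used.  This is only an illustration, assuming a
compiler built with these patches and the same options as the test cases
(-maltivec -maltivec2); the function name copy_quad is ours, and the exact
element placement is whatever the lvswx/stvswx instructions define.

#include <altivec.h>

/* Load 16 bytes through lvswx and store them back through stvswx.
   Each built-in takes an (offset, pointer) pair, as in the prototypes
   listed above.  */
vector float
copy_quad (long offset, float *src, float *dst)
{
  vector float v = vec_lvswx (offset, src);
  vec_stvswx (v, offset, dst);
  return v;
}
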
diff -ruN gcc-20120223-orig/gcc/doc/invoke.texi gcc-20120223/gcc/doc/invoke.texi
--- gcc-20120223-orig/gcc/doc/invoke.texi	2012-02-23 16:10:14.000000000 -0600
+++ gcc-20120223/gcc/doc/invoke.texi	2012-03-02 15:14:58.433038989 -0600
@@ -796,7 +796,7 @@
 -mcmodel=@var{code-model} @gol
 -mpower  -mno-power  -mpower2  -mno-power2 @gol
 -mpowerpc  -mpowerpc64  -mno-powerpc @gol
--maltivec  -mno-altivec @gol
+-maltivec  -mno-altivec  -maltivec2  -mno-altivec2 @gol
 -mpowerpc-gpopt  -mno-powerpc-gpopt @gol
 -mpowerpc-gfxopt  -mno-powerpc-gfxopt @gol
 -mmfcrf  -mno-mfcrf  -mpopcntb  -mno-popcntb -mpopcntd -mno-popcntd @gol
@@ -16328,16 +16328,21 @@
 The @option{-mpopcntb} option allows GCC to generate the popcount and
 double-precision FP reciprocal estimate instruction implemented on the
 POWER5 processor and other processors that support the PowerPC V2.02
-architecture.
-The @option{-mpopcntd} option allows GCC to generate the popcount
-instruction implemented on the POWER7 processor and other processors
-that support the PowerPC V2.06 architecture.
+architecture.  On the e5500 and e6500 processors, only the popcount
+instruction is generated.
+The @option{-mpopcntd} option allows GCC to generate the popcount and
+double-word to FP conversion instructions implemented on the POWER7
+processor and other processors that support the PowerPC V2.06
+architecture.  On the e5500 and e6500 processors, only the popcount
+instruction is generated.
 The @option{-mfprnd} option allows GCC to generate the FP round to
 integer instructions implemented on the POWER5+ processor and other
 processors that support the PowerPC V2.03 architecture.
 The @option{-mcmpb} option allows GCC to generate the compare bytes
-instruction implemented on the POWER6 processor and other processors
-that support the PowerPC V2.05 architecture.
+and copy sign instructions implemented on the POWER6 processor and
+other processors that support the PowerPC V2.05 architecture.  On the
+e5500 and e6500 processors, only the compare bytes instruction is
+generated.
 The @option{-mmfpgpr} option allows GCC to generate the FP move to/from
 general-purpose register instructions implemented on the POWER6X
 processor and other processors that support the extended PowerPC V2.05
@@ -16384,11 +16389,13 @@
 @samp{603e}, @samp{604}, @samp{604e}, @samp{620}, @samp{630}, @samp{740},
 @samp{7400}, @samp{7450}, @samp{750}, @samp{801}, @samp{821}, @samp{823},
 @samp{860}, @samp{970}, @samp{8540}, @samp{a2}, @samp{e300c2},
-@samp{e300c3}, @samp{e500mc}, @samp{e500mc64}, @samp{ec603e}, @samp{G3},
-@samp{G4}, @samp{G5}, @samp{titan}, @samp{power}, @samp{power2}, @samp{power3},
-@samp{power4}, @samp{power5}, @samp{power5+}, @samp{power6}, @samp{power6x},
-@samp{power7}, @samp{common}, @samp{powerpc}, @samp{powerpc64}, @samp{rios},
-@samp{rios1}, @samp{rios2}, @samp{rsc}, and @samp{rs64}.
+@samp{e300c3}, @samp{e500mc}, @samp{e500mc64}, @samp{e5500},
+@samp{e6500}, @samp{ec603e}, @samp{G3}, @samp{G4}, @samp{G5},
+@samp{titan}, @samp{power}, @samp{power2}, @samp{power3},
+@samp{power4}, @samp{power5}, @samp{power5+}, @samp{power6},
+@samp{power6x}, @samp{power7}, @samp{common}, @samp{powerpc},
+@samp{powerpc64}, @samp{rios}, @samp{rios1}, @samp{rios2}, @samp{rsc},
+and @samp{rs64}.
 
 @option{-mcpu=common} selects a completely generic processor.  Code
 generated under this option will run on any POWER or PowerPC processor.
@@ -16409,10 +16416,11 @@
 The @option{-mcpu} options automatically enable or disable the
 following options:
 
-@gccoptlist{-maltivec  -mfprnd  -mhard-float  -mmfcrf  -mmultiple @gol
--mnew-mnemonics  -mpopcntb -mpopcntd  -mpower  -mpower2  -mpowerpc64 @gol
--mpowerpc-gpopt  -mpowerpc-gfxopt  -msingle-float -mdouble-float @gol
--msimple-fpu -mstring  -mmulhw  -mdlmzb  -mmfpgpr -mvsx}
+@gccoptlist{-maltivec -maltivec2 -mfprnd -mhard-float -mmfcrf @gol
+-mmultiple -mnew-mnemonics -mpopcntb -mpopcntd -mpower -mpower2 @gol
+-mpowerpc64 -mpowerpc-gpopt -mpowerpc-gfxopt -msingle-float @gol
+-mdouble-float -msimple-fpu -mstring -mmulhw -mdlmzb -mmfpgpr @gol
+-mvsx}
 
 The particular options set for any particular CPU will vary between
 compiler versions, depending on what setting seems to produce optimal
@@ -16463,6 +16471,16 @@
 @option{-mabi=altivec} to adjust the current ABI with AltiVec ABI
 enhancements.
 
+@item -maltivec2
+@itemx -mno-altivec2
+@opindex maltivec2
+@opindex mno-altivec2
+Generate code that uses (does not use) AltiVec2 instructions, and also
+enable the use of built-in functions that allow more direct access to
+the AltiVec2 instruction set.  You may also need to set
+@option{-mabi=altivec} to adjust the current ABI with AltiVec ABI
+enhancements.
+
 @item -mvrsave
 @itemx -mno-vrsave
 @opindex mvrsave
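
To illustrate the @option{-maltivec2} option described above, here is a
small example in the style of the new test cases.  The function name is
ours; the expected vabsdub instruction is taken from
altivec2_builtin-1.c.

#include <altivec.h>

/* Absolute difference of unsigned bytes; with -O2 -maltivec -maltivec2
   this is expected to compile to a single vabsdub.  */
vector unsigned char
diff_u8 (vector unsigned char a, vector unsigned char b)
{
  return vec_absd (a, b);
}

Compile with, e.g., -O2 -maltivec -maltivec2 -S; per the option list
above, -mcpu=e6500 is expected to enable -maltivec2 automatically.
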
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.dg/tree-ssa/vector-3.c gcc-20120223/gcc/testsuite/gcc.dg/tree-ssa/vector-3.c
--- gcc-20120223-orig/gcc/testsuite/gcc.dg/tree-ssa/vector-3.c	2012-02-23 16:20:08.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.dg/tree-ssa/vector-3.c	2012-02-27 11:35:37.470038854 -0600
@@ -14,7 +14,7 @@
 
 /* We should be able to optimize this to just "return 0.0;" */
 /* { dg-final { scan-tree-dump-times "BIT_FIELD_REF" 0 "optimized"} } */
-/* { dg-final { scan-tree-dump-times "0.0" 1 "optimized"} } */
+/* { dg-final { scan-tree-dump-times "0\\\.0" 1 "optimized"} } */
 
 /* { dg-final { cleanup-tree-dump "optimized" } } */
 
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-10.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-10.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-10.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-10.c	2012-03-01 13:35:04.343039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvtlx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc1(long a, void *p)           { return __builtin_altivec_lvtlx (a,p); }
+vsf  llx01(long a, vsf *p)          { return __builtin_vec_lvtlx (a,p); }
+vsf  llx02(long a, sf *p)           { return __builtin_vec_lvtlx (a,p); }
+vbi  llx03(long a, vbi *p)          { return __builtin_vec_lvtlx (a,p); }
+vsi  llx04(long a, vsi *p)          { return __builtin_vec_lvtlx (a,p); }
+vsi  llx05(long a, si *p)           { return __builtin_vec_lvtlx (a,p); }
+vui  llx06(long a, vui *p)          { return __builtin_vec_lvtlx (a,p); }
+vui  llx07(long a, ui *p)           { return __builtin_vec_lvtlx (a,p); }
+vbs  llx08(long a, vbs *p)          { return __builtin_vec_lvtlx (a,p); }
+vp   llx09(long a, vp *p)           { return __builtin_vec_lvtlx (a,p); }
+vss  llx10(long a, vss *p)          { return __builtin_vec_lvtlx (a,p); }
+vss  llx11(long a, ss *p)           { return __builtin_vec_lvtlx (a,p); }
+vus  llx12(long a, vus *p)          { return __builtin_vec_lvtlx (a,p); }
+vus  llx13(long a, us *p)           { return __builtin_vec_lvtlx (a,p); }
+vbc  llx14(long a, vbc *p)          { return __builtin_vec_lvtlx (a,p); }
+vsc  llx15(long a, vsc *p)          { return __builtin_vec_lvtlx (a,p); }
+vsc  llx16(long a, sc *p)           { return __builtin_vec_lvtlx (a,p); }
+vuc  llx17(long a, vuc *p)          { return __builtin_vec_lvtlx (a,p); }
+vuc  llx18(long a, uc *p)           { return __builtin_vec_lvtlx (a,p); }
+vsf  Dllx01(long a, vsf *p)         { return vec_lvtlx (a,p); }
+vsf  Dllx02(long a, sf *p)          { return vec_lvtlx (a,p); }
+vbi  Dllx03(long a, vbi *p)         { return vec_lvtlx (a,p); }
+vsi  Dllx04(long a, vsi *p)         { return vec_lvtlx (a,p); }
+vsi  Dllx05(long a, si *p)          { return vec_lvtlx (a,p); }
+vui  Dllx06(long a, vui *p)         { return vec_lvtlx (a,p); }
+vui  Dllx07(long a, ui *p)          { return vec_lvtlx (a,p); }
+vbs  Dllx08(long a, vbs *p)         { return vec_lvtlx (a,p); }
+vp   Dllx09(long a, vp *p)          { return vec_lvtlx (a,p); }
+vss  Dllx10(long a, vss *p)         { return vec_lvtlx (a,p); }
+vss  Dllx11(long a, ss *p)          { return vec_lvtlx (a,p); }
+vus  Dllx12(long a, vus *p)         { return vec_lvtlx (a,p); }
+vus  Dllx13(long a, us *p)          { return vec_lvtlx (a,p); }
+vbc  Dllx14(long a, vbc *p)         { return vec_lvtlx (a,p); }
+vsc  Dllx15(long a, vsc *p)         { return vec_lvtlx (a,p); }
+vsc  Dllx16(long a, sc *p)          { return vec_lvtlx (a,p); }
+vuc  Dllx17(long a, vuc *p)         { return vec_lvtlx (a,p); }
+vuc  Dllx18(long a, uc *p)          { return vec_lvtlx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-11.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-11.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-11.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-11.c	2012-03-01 13:35:04.373039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvtlxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc2(long a, void *p)           { return __builtin_altivec_lvtlxl (a,p); }
+vsf  llxl01(long a, vsf *p)         { return __builtin_vec_lvtlxl (a,p); }
+vsf  llxl02(long a, sf *p)          { return __builtin_vec_lvtlxl (a,p); }
+vbi  llxl03(long a, vbi *p)         { return __builtin_vec_lvtlxl (a,p); }
+vsi  llxl04(long a, vsi *p)         { return __builtin_vec_lvtlxl (a,p); }
+vsi  llxl05(long a, si *p)          { return __builtin_vec_lvtlxl (a,p); }
+vui  llxl06(long a, vui *p)         { return __builtin_vec_lvtlxl (a,p); }
+vui  llxl07(long a, ui *p)          { return __builtin_vec_lvtlxl (a,p); }
+vbs  llxl08(long a, vbs *p)         { return __builtin_vec_lvtlxl (a,p); }
+vp   llxl09(long a, vp *p)          { return __builtin_vec_lvtlxl (a,p); }
+vss  llxl10(long a, vss *p)         { return __builtin_vec_lvtlxl (a,p); }
+vss  llxl11(long a, ss *p)          { return __builtin_vec_lvtlxl (a,p); }
+vus  llxl12(long a, vus *p)         { return __builtin_vec_lvtlxl (a,p); }
+vus  llxl13(long a, us *p)          { return __builtin_vec_lvtlxl (a,p); }
+vbc  llxl14(long a, vbc *p)         { return __builtin_vec_lvtlxl (a,p); }
+vsc  llxl15(long a, vsc *p)         { return __builtin_vec_lvtlxl (a,p); }
+vsc  llxl16(long a, sc *p)          { return __builtin_vec_lvtlxl (a,p); }
+vuc  llxl17(long a, vuc *p)         { return __builtin_vec_lvtlxl (a,p); }
+vuc  llxl18(long a, uc *p)          { return __builtin_vec_lvtlxl (a,p); }
+vsf  Dllxl01(long a, vsf *p)        { return vec_lvtlxl (a,p); }
+vsf  Dllxl02(long a, sf *p)         { return vec_lvtlxl (a,p); }
+vbi  Dllxl03(long a, vbi *p)        { return vec_lvtlxl (a,p); }
+vsi  Dllxl04(long a, vsi *p)        { return vec_lvtlxl (a,p); }
+vsi  Dllxl05(long a, si *p)         { return vec_lvtlxl (a,p); }
+vui  Dllxl06(long a, vui *p)        { return vec_lvtlxl (a,p); }
+vui  Dllxl07(long a, ui *p)         { return vec_lvtlxl (a,p); }
+vbs  Dllxl08(long a, vbs *p)        { return vec_lvtlxl (a,p); }
+vp   Dllxl09(long a, vp *p)         { return vec_lvtlxl (a,p); }
+vss  Dllxl10(long a, vss *p)        { return vec_lvtlxl (a,p); }
+vss  Dllxl11(long a, ss *p)         { return vec_lvtlxl (a,p); }
+vus  Dllxl12(long a, vus *p)        { return vec_lvtlxl (a,p); }
+vus  Dllxl13(long a, us *p)         { return vec_lvtlxl (a,p); }
+vbc  Dllxl14(long a, vbc *p)        { return vec_lvtlxl (a,p); }
+vsc  Dllxl15(long a, vsc *p)        { return vec_lvtlxl (a,p); }
+vsc  Dllxl16(long a, sc *p)         { return vec_lvtlxl (a,p); }
+vuc  Dllxl17(long a, vuc *p)        { return vec_lvtlxl (a,p); }
+vuc  Dllxl18(long a, uc *p)         { return vec_lvtlxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-12.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-12.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-12.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-12.c	2012-03-01 13:35:04.404039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvtrx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc3(long a, void *p)           { return __builtin_altivec_lvtrx (a,p); }
+vsf  lrx01(long a, vsf *p)          { return __builtin_vec_lvtrx (a,p); }
+vsf  lrx02(long a, sf *p)           { return __builtin_vec_lvtrx (a,p); }
+vbi  lrx03(long a, vbi *p)          { return __builtin_vec_lvtrx (a,p); }
+vsi  lrx04(long a, vsi *p)          { return __builtin_vec_lvtrx (a,p); }
+vsi  lrx05(long a, si *p)           { return __builtin_vec_lvtrx (a,p); }
+vui  lrx06(long a, vui *p)          { return __builtin_vec_lvtrx (a,p); }
+vui  lrx07(long a, ui *p)           { return __builtin_vec_lvtrx (a,p); }
+vbs  lrx08(long a, vbs *p)          { return __builtin_vec_lvtrx (a,p); }
+vp   lrx09(long a, vp *p)           { return __builtin_vec_lvtrx (a,p); }
+vss  lrx10(long a, vss *p)          { return __builtin_vec_lvtrx (a,p); }
+vss  lrx11(long a, ss *p)           { return __builtin_vec_lvtrx (a,p); }
+vus  lrx12(long a, vus *p)          { return __builtin_vec_lvtrx (a,p); }
+vus  lrx13(long a, us *p)           { return __builtin_vec_lvtrx (a,p); }
+vbc  lrx14(long a, vbc *p)          { return __builtin_vec_lvtrx (a,p); }
+vsc  lrx15(long a, vsc *p)          { return __builtin_vec_lvtrx (a,p); }
+vsc  lrx16(long a, sc *p)           { return __builtin_vec_lvtrx (a,p); }
+vuc  lrx17(long a, vuc *p)          { return __builtin_vec_lvtrx (a,p); }
+vuc  lrx18(long a, uc *p)           { return __builtin_vec_lvtrx (a,p); }
+vsf  Dlrx01(long a, vsf *p)         { return vec_lvtrx (a,p); }
+vsf  Dlrx02(long a, sf *p)          { return vec_lvtrx (a,p); }
+vbi  Dlrx03(long a, vbi *p)         { return vec_lvtrx (a,p); }
+vsi  Dlrx04(long a, vsi *p)         { return vec_lvtrx (a,p); }
+vsi  Dlrx05(long a, si *p)          { return vec_lvtrx (a,p); }
+vui  Dlrx06(long a, vui *p)         { return vec_lvtrx (a,p); }
+vui  Dlrx07(long a, ui *p)          { return vec_lvtrx (a,p); }
+vbs  Dlrx08(long a, vbs *p)         { return vec_lvtrx (a,p); }
+vp   Dlrx09(long a, vp *p)          { return vec_lvtrx (a,p); }
+vss  Dlrx10(long a, vss *p)         { return vec_lvtrx (a,p); }
+vss  Dlrx11(long a, ss *p)          { return vec_lvtrx (a,p); }
+vus  Dlrx12(long a, vus *p)         { return vec_lvtrx (a,p); }
+vus  Dlrx13(long a, us *p)          { return vec_lvtrx (a,p); }
+vbc  Dlrx14(long a, vbc *p)         { return vec_lvtrx (a,p); }
+vsc  Dlrx15(long a, vsc *p)         { return vec_lvtrx (a,p); }
+vsc  Dlrx16(long a, sc *p)          { return vec_lvtrx (a,p); }
+vuc  Dlrx17(long a, vuc *p)         { return vec_lvtrx (a,p); }
+vuc  Dlrx18(long a, uc *p)          { return vec_lvtrx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-13.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-13.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-13.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-13.c	2012-03-01 13:35:04.433039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvtrxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc4(long a, void *p)           { return __builtin_altivec_lvtrxl (a,p); }
+vsf  lrxl01(long a, vsf *p)         { return __builtin_vec_lvtrxl (a,p); }
+vsf  lrxl02(long a, sf *p)          { return __builtin_vec_lvtrxl (a,p); }
+vbi  lrxl03(long a, vbi *p)         { return __builtin_vec_lvtrxl (a,p); }
+vsi  lrxl04(long a, vsi *p)         { return __builtin_vec_lvtrxl (a,p); }
+vsi  lrxl05(long a, si *p)          { return __builtin_vec_lvtrxl (a,p); }
+vui  lrxl06(long a, vui *p)         { return __builtin_vec_lvtrxl (a,p); }
+vui  lrxl07(long a, ui *p)          { return __builtin_vec_lvtrxl (a,p); }
+vbs  lrxl08(long a, vbs *p)         { return __builtin_vec_lvtrxl (a,p); }
+vp   lrxl09(long a, vp *p)          { return __builtin_vec_lvtrxl (a,p); }
+vss  lrxl10(long a, vss *p)         { return __builtin_vec_lvtrxl (a,p); }
+vss  lrxl11(long a, ss *p)          { return __builtin_vec_lvtrxl (a,p); }
+vus  lrxl12(long a, vus *p)         { return __builtin_vec_lvtrxl (a,p); }
+vus  lrxl13(long a, us *p)          { return __builtin_vec_lvtrxl (a,p); }
+vbc  lrxl14(long a, vbc *p)         { return __builtin_vec_lvtrxl (a,p); }
+vsc  lrxl15(long a, vsc *p)         { return __builtin_vec_lvtrxl (a,p); }
+vsc  lrxl16(long a, sc *p)          { return __builtin_vec_lvtrxl (a,p); }
+vuc  lrxl17(long a, vuc *p)         { return __builtin_vec_lvtrxl (a,p); }
+vuc  lrxl18(long a, uc *p)          { return __builtin_vec_lvtrxl (a,p); }
+vsf  Dlrxl01(long a, vsf *p)        { return vec_lvtrxl (a,p); }
+vsf  Dlrxl02(long a, sf *p)         { return vec_lvtrxl (a,p); }
+vbi  Dlrxl03(long a, vbi *p)        { return vec_lvtrxl (a,p); }
+vsi  Dlrxl04(long a, vsi *p)        { return vec_lvtrxl (a,p); }
+vsi  Dlrxl05(long a, si *p)         { return vec_lvtrxl (a,p); }
+vui  Dlrxl06(long a, vui *p)        { return vec_lvtrxl (a,p); }
+vui  Dlrxl07(long a, ui *p)         { return vec_lvtrxl (a,p); }
+vbs  Dlrxl08(long a, vbs *p)        { return vec_lvtrxl (a,p); }
+vp   Dlrxl09(long a, vp *p)         { return vec_lvtrxl (a,p); }
+vss  Dlrxl10(long a, vss *p)        { return vec_lvtrxl (a,p); }
+vss  Dlrxl11(long a, ss *p)         { return vec_lvtrxl (a,p); }
+vus  Dlrxl12(long a, vus *p)        { return vec_lvtrxl (a,p); }
+vus  Dlrxl13(long a, us *p)         { return vec_lvtrxl (a,p); }
+vbc  Dlrxl14(long a, vbc *p)        { return vec_lvtrxl (a,p); }
+vsc  Dlrxl15(long a, vsc *p)        { return vec_lvtrxl (a,p); }
+vsc  Dlrxl16(long a, sc *p)         { return vec_lvtrxl (a,p); }
+vuc  Dlrxl17(long a, vuc *p)        { return vec_lvtrxl (a,p); }
+vuc  Dlrxl18(long a, uc *p)         { return vec_lvtrxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-14.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-14.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-14.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-14.c	2012-03-01 13:35:04.465039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvflx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc1(vsc v, long a, void *p)    { __builtin_altivec_stvflx (v,a,p); }
+void slx01(vsf v, long a, vsf *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx02(vsf v, long a, sf *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx03(vbi v, long a, vbi *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx04(vsi v, long a, vsi *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx05(vsi v, long a, si *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx06(vui v, long a, vui *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx07(vui v, long a, ui *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx08(vbs v, long a, vbs *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx09(vp v, long a, vp *p)     { __builtin_vec_stvflx (v,a,p); }
+void slx10(vss v, long a, vss *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx11(vss v, long a, ss *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx12(vus v, long a, vus *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx13(vus v, long a, us *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx14(vbc v, long a, vbc *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx15(vsc v, long a, vsc *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx16(vsc v, long a, sc *p)    { __builtin_vec_stvflx (v,a,p); }
+void slx17(vuc v, long a, vuc *p)   { __builtin_vec_stvflx (v,a,p); }
+void slx18(vuc v, long a, uc *p)    { __builtin_vec_stvflx (v,a,p); }
+void Dslx01(vsf v, long a, vsf *p)  { vec_stvflx (v,a,p); }
+void Dslx02(vsf v, long a, sf *p)   { vec_stvflx (v,a,p); }
+void Dslx03(vbi v, long a, vbi *p)  { vec_stvflx (v,a,p); }
+void Dslx04(vsi v, long a, vsi *p)  { vec_stvflx (v,a,p); }
+void Dslx05(vsi v, long a, si *p)   { vec_stvflx (v,a,p); }
+void Dslx06(vui v, long a, vui *p)  { vec_stvflx (v,a,p); }
+void Dslx07(vui v, long a, ui *p)   { vec_stvflx (v,a,p); }
+void Dslx08(vbs v, long a, vbs *p)  { vec_stvflx (v,a,p); }
+void Dslx09(vp v, long a, vp *p)    { vec_stvflx (v,a,p); }
+void Dslx10(vss v, long a, vss *p)  { vec_stvflx (v,a,p); }
+void Dslx11(vss v, long a, ss *p)   { vec_stvflx (v,a,p); }
+void Dslx12(vus v, long a, vus *p)  { vec_stvflx (v,a,p); }
+void Dslx13(vus v, long a, us *p)   { vec_stvflx (v,a,p); }
+void Dslx14(vbc v, long a, vbc *p)  { vec_stvflx (v,a,p); }
+void Dslx15(vsc v, long a, vsc *p)  { vec_stvflx (v,a,p); }
+void Dslx16(vsc v, long a, sc *p)   { vec_stvflx (v,a,p); }
+void Dslx17(vuc v, long a, vuc *p)  { vec_stvflx (v,a,p); }
+void Dslx18(vuc v, long a, uc *p)   { vec_stvflx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-15.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-15.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-15.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-15.c	2012-03-01 13:35:04.495039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvflxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc2(vsc v, long a, void *p)    { __builtin_altivec_stvflxl (v,a,p); }
+void slxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl02(vsf v, long a, sf *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl05(vsi v, long a, si *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl06(vui v, long a, vui *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl07(vui v, long a, ui *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl09(vp v, long a, vp *p)    { __builtin_vec_stvflxl (v,a,p); }
+void slxl10(vss v, long a, vss *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl11(vss v, long a, ss *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl12(vus v, long a, vus *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl13(vus v, long a, us *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl16(vsc v, long a, sc *p)   { __builtin_vec_stvflxl (v,a,p); }
+void slxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvflxl (v,a,p); }
+void slxl18(vuc v, long a, uc *p)   { __builtin_vec_stvflxl (v,a,p); }
+void Dslxl01(vsf v, long a, vsf *p) { vec_stvflxl (v,a,p); }
+void Dslxl02(vsf v, long a, sf *p)  { vec_stvflxl (v,a,p); }
+void Dslxl03(vbi v, long a, vbi *p) { vec_stvflxl (v,a,p); }
+void Dslxl04(vsi v, long a, vsi *p) { vec_stvflxl (v,a,p); }
+void Dslxl05(vsi v, long a, si *p)  { vec_stvflxl (v,a,p); }
+void Dslxl06(vui v, long a, vui *p) { vec_stvflxl (v,a,p); }
+void Dslxl07(vui v, long a, ui *p)  { vec_stvflxl (v,a,p); }
+void Dslxl08(vbs v, long a, vbs *p) { vec_stvflxl (v,a,p); }
+void Dslxl09(vp v, long a, vp *p)   { vec_stvflxl (v,a,p); }
+void Dslxl10(vss v, long a, vss *p) { vec_stvflxl (v,a,p); }
+void Dslxl11(vss v, long a, ss *p)  { vec_stvflxl (v,a,p); }
+void Dslxl12(vus v, long a, vus *p) { vec_stvflxl (v,a,p); }
+void Dslxl13(vus v, long a, us *p)  { vec_stvflxl (v,a,p); }
+void Dslxl14(vbc v, long a, vbc *p) { vec_stvflxl (v,a,p); }
+void Dslxl15(vsc v, long a, vsc *p) { vec_stvflxl (v,a,p); }
+void Dslxl16(vsc v, long a, sc *p)  { vec_stvflxl (v,a,p); }
+void Dslxl17(vuc v, long a, vuc *p) { vec_stvflxl (v,a,p); }
+void Dslxl18(vuc v, long a, uc *p)  { vec_stvflxl (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-16.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-16.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-16.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-16.c	2012-03-01 13:35:04.525039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvfrx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc3(vsc v, long a, void *p)    { __builtin_altivec_stvfrx (v,a,p); }
+void srx01(vsf v, long a, vsf *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx02(vsf v, long a, sf *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx03(vbi v, long a, vbi *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx04(vsi v, long a, vsi *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx05(vsi v, long a, si *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx06(vui v, long a, vui *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx07(vui v, long a, ui *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx08(vbs v, long a, vbs *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx09(vp v, long a, vp *p)     { __builtin_vec_stvfrx (v,a,p); }
+void srx10(vss v, long a, vss *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx11(vss v, long a, ss *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx12(vus v, long a, vus *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx13(vus v, long a, us *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx14(vbc v, long a, vbc *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx15(vsc v, long a, vsc *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx16(vsc v, long a, sc *p)    { __builtin_vec_stvfrx (v,a,p); }
+void srx17(vuc v, long a, vuc *p)   { __builtin_vec_stvfrx (v,a,p); }
+void srx18(vuc v, long a, uc *p)    { __builtin_vec_stvfrx (v,a,p); }
+void Dsrx01(vsf v, long a, vsf *p)  { vec_stvfrx (v,a,p); }
+void Dsrx02(vsf v, long a, sf *p)   { vec_stvfrx (v,a,p); }
+void Dsrx03(vbi v, long a, vbi *p)  { vec_stvfrx (v,a,p); }
+void Dsrx04(vsi v, long a, vsi *p)  { vec_stvfrx (v,a,p); }
+void Dsrx05(vsi v, long a, si *p)   { vec_stvfrx (v,a,p); }
+void Dsrx06(vui v, long a, vui *p)  { vec_stvfrx (v,a,p); }
+void Dsrx07(vui v, long a, ui *p)   { vec_stvfrx (v,a,p); }
+void Dsrx08(vbs v, long a, vbs *p)  { vec_stvfrx (v,a,p); }
+void Dsrx09(vp v, long a, vp *p)    { vec_stvfrx (v,a,p); }
+void Dsrx10(vss v, long a, vss *p)  { vec_stvfrx (v,a,p); }
+void Dsrx11(vss v, long a, ss *p)   { vec_stvfrx (v,a,p); }
+void Dsrx12(vus v, long a, vus *p)  { vec_stvfrx (v,a,p); }
+void Dsrx13(vus v, long a, us *p)   { vec_stvfrx (v,a,p); }
+void Dsrx14(vbc v, long a, vbc *p)  { vec_stvfrx (v,a,p); }
+void Dsrx15(vsc v, long a, vsc *p)  { vec_stvfrx (v,a,p); }
+void Dsrx16(vsc v, long a, sc *p)   { vec_stvfrx (v,a,p); }
+void Dsrx17(vuc v, long a, vuc *p)  { vec_stvfrx (v,a,p); }
+void Dsrx18(vuc v, long a, uc *p)   { vec_stvfrx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-17.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-17.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-17.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-17.c	2012-03-01 13:35:04.556039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvfrxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc4(vsc v, long a, void *p)    { __builtin_altivec_stvfrxl (v,a,p); }
+void srxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl02(vsf v, long a, sf *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl05(vsi v, long a, si *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl06(vui v, long a, vui *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl07(vui v, long a, ui *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl09(vp v, long a, vp *p)    { __builtin_vec_stvfrxl (v,a,p); }
+void srxl10(vss v, long a, vss *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl11(vss v, long a, ss *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl12(vus v, long a, vus *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl13(vus v, long a, us *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl16(vsc v, long a, sc *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void srxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvfrxl (v,a,p); }
+void srxl18(vuc v, long a, uc *p)   { __builtin_vec_stvfrxl (v,a,p); }
+void Dsrxl01(vsf v, long a, vsf *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl02(vsf v, long a, sf *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl03(vbi v, long a, vbi *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl04(vsi v, long a, vsi *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl05(vsi v, long a, si *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl06(vui v, long a, vui *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl07(vui v, long a, ui *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl08(vbs v, long a, vbs *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl09(vp v, long a, vp *p)   { vec_stvfrxl (v,a,p); }
+void Dsrxl10(vss v, long a, vss *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl11(vss v, long a, ss *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl12(vus v, long a, vus *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl13(vus v, long a, us *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl14(vbc v, long a, vbc *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl15(vsc v, long a, vsc *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl16(vsc v, long a, sc *p)  { vec_stvfrxl (v,a,p); }
+void Dsrxl17(vuc v, long a, vuc *p) { vec_stvfrxl (v,a,p); }
+void Dsrxl18(vuc v, long a, uc *p)  { vec_stvfrxl (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-18.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-18.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-18.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-18.c	2012-03-01 13:35:04.586039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvswx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  ls1(long a, void *p)           { return __builtin_altivec_lvswx (a,p); }
+vsf  ls01(long a, vsf *p)           { return __builtin_vec_lvswx (a,p); }
+vsf  ls02(long a, sf *p)            { return __builtin_vec_lvswx (a,p); }
+vbi  ls03(long a, vbi *p)           { return __builtin_vec_lvswx (a,p); }
+vsi  ls04(long a, vsi *p)           { return __builtin_vec_lvswx (a,p); }
+vsi  ls05(long a, si *p)            { return __builtin_vec_lvswx (a,p); }
+vui  ls06(long a, vui *p)           { return __builtin_vec_lvswx (a,p); }
+vui  ls07(long a, ui *p)            { return __builtin_vec_lvswx (a,p); }
+vbs  ls08(long a, vbs *p)           { return __builtin_vec_lvswx (a,p); }
+vp   ls09(long a, vp *p)            { return __builtin_vec_lvswx (a,p); }
+vss  ls10(long a, vss *p)           { return __builtin_vec_lvswx (a,p); }
+vss  ls11(long a, ss *p)            { return __builtin_vec_lvswx (a,p); }
+vus  ls12(long a, vus *p)           { return __builtin_vec_lvswx (a,p); }
+vus  ls13(long a, us *p)            { return __builtin_vec_lvswx (a,p); }
+vbc  ls14(long a, vbc *p)           { return __builtin_vec_lvswx (a,p); }
+vsc  ls15(long a, vsc *p)           { return __builtin_vec_lvswx (a,p); }
+vsc  ls16(long a, sc *p)            { return __builtin_vec_lvswx (a,p); }
+vuc  ls17(long a, vuc *p)           { return __builtin_vec_lvswx (a,p); }
+vuc  ls18(long a, uc *p)            { return __builtin_vec_lvswx (a,p); }
+vsf  Dls01(long a, vsf *p)          { return vec_lvswx (a,p); }
+vsf  Dls02(long a, sf *p)           { return vec_lvswx (a,p); }
+vbi  Dls03(long a, vbi *p)          { return vec_lvswx (a,p); }
+vsi  Dls04(long a, vsi *p)          { return vec_lvswx (a,p); }
+vsi  Dls05(long a, si *p)           { return vec_lvswx (a,p); }
+vui  Dls06(long a, vui *p)          { return vec_lvswx (a,p); }
+vui  Dls07(long a, ui *p)           { return vec_lvswx (a,p); }
+vbs  Dls08(long a, vbs *p)          { return vec_lvswx (a,p); }
+vp   Dls09(long a, vp *p)           { return vec_lvswx (a,p); }
+vss  Dls10(long a, vss *p)          { return vec_lvswx (a,p); }
+vss  Dls11(long a, ss *p)           { return vec_lvswx (a,p); }
+vus  Dls12(long a, vus *p)          { return vec_lvswx (a,p); }
+vus  Dls13(long a, us *p)           { return vec_lvswx (a,p); }
+vbc  Dls14(long a, vbc *p)          { return vec_lvswx (a,p); }
+vsc  Dls15(long a, vsc *p)          { return vec_lvswx (a,p); }
+vsc  Dls16(long a, sc *p)           { return vec_lvswx (a,p); }
+vuc  Dls17(long a, vuc *p)          { return vec_lvswx (a,p); }
+vuc  Dls18(long a, uc *p)           { return vec_lvswx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-19.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-19.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-19.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-19.c	2012-03-01 13:35:04.617039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvswxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  ls2l(long a, void *p)          { return __builtin_altivec_lvswxl (a,p); }
+vsf  lsl01(long a, vsf *p)          { return __builtin_vec_lvswxl (a,p); }
+vsf  lsl02(long a, sf *p)           { return __builtin_vec_lvswxl (a,p); }
+vbi  lsl03(long a, vbi *p)          { return __builtin_vec_lvswxl (a,p); }
+vsi  lsl04(long a, vsi *p)          { return __builtin_vec_lvswxl (a,p); }
+vsi  lsl05(long a, si *p)           { return __builtin_vec_lvswxl (a,p); }
+vui  lsl06(long a, vui *p)          { return __builtin_vec_lvswxl (a,p); }
+vui  lsl07(long a, ui *p)           { return __builtin_vec_lvswxl (a,p); }
+vbs  lsl08(long a, vbs *p)          { return __builtin_vec_lvswxl (a,p); }
+vp   lsl09(long a, vp *p)           { return __builtin_vec_lvswxl (a,p); }
+vss  lsl10(long a, vss *p)          { return __builtin_vec_lvswxl (a,p); }
+vss  lsl11(long a, ss *p)           { return __builtin_vec_lvswxl (a,p); }
+vus  lsl12(long a, vus *p)          { return __builtin_vec_lvswxl (a,p); }
+vus  lsl13(long a, us *p)           { return __builtin_vec_lvswxl (a,p); }
+vbc  lsl14(long a, vbc *p)          { return __builtin_vec_lvswxl (a,p); }
+vsc  lsl15(long a, vsc *p)          { return __builtin_vec_lvswxl (a,p); }
+vsc  lsl16(long a, sc *p)           { return __builtin_vec_lvswxl (a,p); }
+vuc  lsl17(long a, vuc *p)          { return __builtin_vec_lvswxl (a,p); }
+vuc  lsl18(long a, uc *p)           { return __builtin_vec_lvswxl (a,p); }
+vsf  Dlsl01(long a, vsf *p)         { return vec_lvswxl (a,p); }
+vsf  Dlsl02(long a, sf *p)          { return vec_lvswxl (a,p); }
+vbi  Dlsl03(long a, vbi *p)         { return vec_lvswxl (a,p); }
+vsi  Dlsl04(long a, vsi *p)         { return vec_lvswxl (a,p); }
+vsi  Dlsl05(long a, si *p)          { return vec_lvswxl (a,p); }
+vui  Dlsl06(long a, vui *p)         { return vec_lvswxl (a,p); }
+vui  Dlsl07(long a, ui *p)          { return vec_lvswxl (a,p); }
+vbs  Dlsl08(long a, vbs *p)         { return vec_lvswxl (a,p); }
+vp   Dlsl09(long a, vp *p)          { return vec_lvswxl (a,p); }
+vss  Dlsl10(long a, vss *p)         { return vec_lvswxl (a,p); }
+vss  Dlsl11(long a, ss *p)          { return vec_lvswxl (a,p); }
+vus  Dlsl12(long a, vus *p)         { return vec_lvswxl (a,p); }
+vus  Dlsl13(long a, us *p)          { return vec_lvswxl (a,p); }
+vbc  Dlsl14(long a, vbc *p)         { return vec_lvswxl (a,p); }
+vsc  Dlsl15(long a, vsc *p)         { return vec_lvswxl (a,p); }
+vsc  Dlsl16(long a, sc *p)          { return vec_lvswxl (a,p); }
+vuc  Dlsl17(long a, vuc *p)         { return vec_lvswxl (a,p); }
+vuc  Dlsl18(long a, uc *p)          { return vec_lvswxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-1.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-1.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-1.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-1.c	2012-03-01 13:35:04.647039002 -0600
@@ -0,0 +1,36 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "vabsdub" 7 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vuc  fa1b(vuc a, vuc b)             { return __builtin_altivec_vabsdub (a,b); }
+vuc  ad1(vuc a, vuc b)              { return __builtin_vec_absd (a,b); }
+vuc  ad2(vbc a, vuc b)              { return __builtin_vec_absd (a,b); }
+vuc  ad3(vuc a, vbc b)              { return __builtin_vec_absd (a,b); }
+vuc  Dad1(vuc a, vuc b)             { return vec_absd (a,b); }
+vuc  Dad2(vbc a, vuc b)             { return vec_absd (a,b); }
+vuc  Dad3(vuc a, vbc b)             { return vec_absd (a,b); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-20.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-20.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-20.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-20.c	2012-03-01 13:35:04.678039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvswx" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void ss1(vsc v, long a, vsc *p)     { __builtin_altivec_stvswx (v,a,p); }
+void ssx01(vsf v, long a, vsf *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx02(vsf v, long a, sf  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx03(vbi v, long a, vbi *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx04(vsi v, long a, vsi *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx05(vsi v, long a, si  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx06(vui v, long a, vui *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx07(vui v, long a, ui  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx08(vbs v, long a, vbs *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx09(vp  v, long a, vp  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx10(vss v, long a, vss *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx11(vss v, long a, ss  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx12(vus v, long a, vus *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx13(vus v, long a, us  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx14(vbc v, long a, vbc *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx15(vsc v, long a, vsc *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx16(vsc v, long a, sc  *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx17(vuc v, long a, vuc *p)   { __builtin_vec_stvswx (v,a,p); }
+void ssx18(vuc v, long a, uc  *p)   { __builtin_vec_stvswx (v,a,p); }
+void Dssx01(vsf v, long a, vsf *p)  { vec_stvswx (v,a,p); }
+void Dssx02(vsf v, long a, sf  *p)  { vec_stvswx (v,a,p); }
+void Dssx03(vbi v, long a, vbi *p)  { vec_stvswx (v,a,p); }
+void Dssx04(vsi v, long a, vsi *p)  { vec_stvswx (v,a,p); }
+void Dssx05(vsi v, long a, si  *p)  { vec_stvswx (v,a,p); }
+void Dssx06(vui v, long a, vui *p)  { vec_stvswx (v,a,p); }
+void Dssx07(vui v, long a, ui  *p)  { vec_stvswx (v,a,p); }
+void Dssx08(vbs v, long a, vbs *p)  { vec_stvswx (v,a,p); }
+void Dssx09(vp  v, long a, vp  *p)  { vec_stvswx (v,a,p); }
+void Dssx10(vss v, long a, vss *p)  { vec_stvswx (v,a,p); }
+void Dssx11(vss v, long a, ss  *p)  { vec_stvswx (v,a,p); }
+void Dssx12(vus v, long a, vus *p)  { vec_stvswx (v,a,p); }
+void Dssx13(vus v, long a, us  *p)  { vec_stvswx (v,a,p); }
+void Dssx14(vbc v, long a, vbc *p)  { vec_stvswx (v,a,p); }
+void Dssx15(vsc v, long a, vsc *p)  { vec_stvswx (v,a,p); }
+void Dssx16(vsc v, long a, sc  *p)  { vec_stvswx (v,a,p); }
+void Dssx17(vuc v, long a, vuc *p)  { vec_stvswx (v,a,p); }
+void Dssx18(vuc v, long a, uc  *p)  { vec_stvswx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-21.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-21.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-21.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-21.c	2012-03-01 13:35:04.709039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvswxl" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void ss2l(vsc v, long a, vsc *p)    { __builtin_altivec_stvswxl (v,a,p); }
+void ssxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl02(vsf v, long a, sf  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl05(vsi v, long a, si  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl06(vui v, long a, vui *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl07(vui v, long a, ui  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl09(vp  v, long a, vp  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl10(vss v, long a, vss *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl11(vss v, long a, ss  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl12(vus v, long a, vus *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl13(vus v, long a, us  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl16(vsc v, long a, sc  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvswxl (v,a,p); }
+void ssxl18(vuc v, long a, uc  *p)  { __builtin_vec_stvswxl (v,a,p); }
+void Dssxl01(vsf v, long a, vsf *p) { vec_stvswxl (v,a,p); }
+void Dssxl02(vsf v, long a, sf  *p) { vec_stvswxl (v,a,p); }
+void Dssxl03(vbi v, long a, vbi *p) { vec_stvswxl (v,a,p); }
+void Dssxl04(vsi v, long a, vsi *p) { vec_stvswxl (v,a,p); }
+void Dssxl05(vsi v, long a, si  *p) { vec_stvswxl (v,a,p); }
+void Dssxl06(vui v, long a, vui *p) { vec_stvswxl (v,a,p); }
+void Dssxl07(vui v, long a, ui  *p) { vec_stvswxl (v,a,p); }
+void Dssxl08(vbs v, long a, vbs *p) { vec_stvswxl (v,a,p); }
+void Dssxl09(vp  v, long a, vp  *p) { vec_stvswxl (v,a,p); }
+void Dssxl10(vss v, long a, vss *p) { vec_stvswxl (v,a,p); }
+void Dssxl11(vss v, long a, ss  *p) { vec_stvswxl (v,a,p); }
+void Dssxl12(vus v, long a, vus *p) { vec_stvswxl (v,a,p); }
+void Dssxl13(vus v, long a, us  *p) { vec_stvswxl (v,a,p); }
+void Dssxl14(vbc v, long a, vbc *p) { vec_stvswxl (v,a,p); }
+void Dssxl15(vsc v, long a, vsc *p) { vec_stvswxl (v,a,p); }
+void Dssxl16(vsc v, long a, sc  *p) { vec_stvswxl (v,a,p); }
+void Dssxl17(vuc v, long a, vuc *p) { vec_stvswxl (v,a,p); }
+void Dssxl18(vuc v, long a, uc  *p) { vec_stvswxl (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-22.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-22.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-22.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-22.c	2012-03-01 13:35:04.739039002 -0600
@@ -0,0 +1,66 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvsm" 37 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lsm(long a, void *p)           { return __builtin_altivec_lvsm (a,p); }
+vsf  lm01(long a, vsf *p)           { return __builtin_vec_lvsm (a,p); }
+vsf  lm02(long a, sf *p)            { return __builtin_vec_lvsm (a,p); }
+vbi  lm03(long a, vbi *p)           { return __builtin_vec_lvsm (a,p); }
+vsi  lm04(long a, vsi *p)           { return __builtin_vec_lvsm (a,p); }
+vsi  lm05(long a, si *p)            { return __builtin_vec_lvsm (a,p); }
+vui  lm06(long a, vui *p)           { return __builtin_vec_lvsm (a,p); }
+vui  lm07(long a, ui *p)            { return __builtin_vec_lvsm (a,p); }
+vbs  lm08(long a, vbs *p)           { return __builtin_vec_lvsm (a,p); }
+vp   lm09(long a, vp *p)            { return __builtin_vec_lvsm (a,p); }
+vss  lm10(long a, vss *p)           { return __builtin_vec_lvsm (a,p); }
+vss  lm11(long a, ss *p)            { return __builtin_vec_lvsm (a,p); }
+vus  lm12(long a, vus *p)           { return __builtin_vec_lvsm (a,p); }
+vus  lm13(long a, us *p)            { return __builtin_vec_lvsm (a,p); }
+vbc  lm14(long a, vbc *p)           { return __builtin_vec_lvsm (a,p); }
+vsc  lm15(long a, vsc *p)           { return __builtin_vec_lvsm (a,p); }
+vsc  lm16(long a, sc *p)            { return __builtin_vec_lvsm (a,p); }
+vuc  lm17(long a, vuc *p)           { return __builtin_vec_lvsm (a,p); }
+vuc  lm18(long a, uc *p)            { return __builtin_vec_lvsm (a,p); }
+vsf  Dlm01(long a, vsf *p)          { return vec_lvsm (a,p); }
+vsf  Dlm02(long a, sf *p)           { return vec_lvsm (a,p); }
+vbi  Dlm03(long a, vbi *p)          { return vec_lvsm (a,p); }
+vsi  Dlm04(long a, vsi *p)          { return vec_lvsm (a,p); }
+vsi  Dlm05(long a, si *p)           { return vec_lvsm (a,p); }
+vui  Dlm06(long a, vui *p)          { return vec_lvsm (a,p); }
+vui  Dlm07(long a, ui *p)           { return vec_lvsm (a,p); }
+vbs  Dlm08(long a, vbs *p)          { return vec_lvsm (a,p); }
+vp   Dlm09(long a, vp *p)           { return vec_lvsm (a,p); }
+vss  Dlm10(long a, vss *p)          { return vec_lvsm (a,p); }
+vss  Dlm11(long a, ss *p)           { return vec_lvsm (a,p); }
+vus  Dlm12(long a, vus *p)          { return vec_lvsm (a,p); }
+vus  Dlm13(long a, us *p)           { return vec_lvsm (a,p); }
+vbc  Dlm14(long a, vbc *p)          { return vec_lvsm (a,p); }
+vsc  Dlm15(long a, vsc *p)          { return vec_lvsm (a,p); }
+vsc  Dlm16(long a, sc *p)           { return vec_lvsm (a,p); }
+vuc  Dlm17(long a, vuc *p)          { return vec_lvsm (a,p); }
+vuc  Dlm18(long a, uc *p)           { return vec_lvsm (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-2.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-2.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-2.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-2.c	2012-03-01 13:35:04.769039002 -0600
@@ -0,0 +1,36 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "vabsduh" 7 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vus  fa2h(vus a, vus b)             { return __builtin_altivec_vabsduh (a,b); }
+vus  ad4(vus a, vus b)              { return __builtin_vec_absd (a,b); }
+vus  ad5(vbs a, vus b)              { return __builtin_vec_absd (a,b); }
+vus  ad6(vus a, vbs b)              { return __builtin_vec_absd (a,b); }
+vus  Dad4(vus a, vus b)             { return vec_absd (a,b); }
+vus  Dad5(vbs a, vus b)             { return vec_absd (a,b); }
+vus  Dad6(vus a, vbs b)             { return vec_absd (a,b); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-3.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-3.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-3.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-3.c	2012-03-01 13:35:04.798039002 -0600
@@ -0,0 +1,36 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "vabsduw" 7 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vui  fa3w(vui a, vui b)             { return __builtin_altivec_vabsduw (a,b); }
+vui  ad7(vui a, vui b)              { return __builtin_vec_absd (a,b); }
+vui  ad8(vbi a, vui b)              { return __builtin_vec_absd (a,b); }
+vui  ad9(vui a, vbi b)              { return __builtin_vec_absd (a,b); }
+vui  Dad7(vui a, vui b)             { return vec_absd (a,b); }
+vui  Dad8(vbi a, vui b)             { return vec_absd (a,b); }
+vui  Dad9(vui a, vbi b)             { return vec_absd (a,b); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-4.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-4.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-4.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-4.c	2012-03-01 13:35:04.828039002 -0600
@@ -0,0 +1,34 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvexbx" 5 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  le1b(long a, void *p)          { return __builtin_altivec_lvexbx (a,p); }
+vsc  leb1(long a, sc *p)            { return __builtin_vec_lvexbx (a,p); }
+vuc  leb2(long a, uc *p)            { return __builtin_vec_lvexbx (a,p); }
+vsc  Dleb1(long a, sc *p)           { return vec_lvexbx (a,p); }
+vuc  Dleb2(long a, uc *p)           { return vec_lvexbx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-5.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-5.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-5.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-5.c	2012-03-01 13:35:04.859039002 -0600
@@ -0,0 +1,34 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvexhx" 5 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vss  le2h(long a, void *p)          { return __builtin_altivec_lvexhx (a,p); }
+vss  leh1(long a, ss *p)            { return __builtin_vec_lvexhx (a,p); }
+vus  leh2(long a, us *p)            { return __builtin_vec_lvexhx (a,p); }
+vss  Dleh1(long a, ss *p)           { return vec_lvexhx (a,p); }
+vus  Dleh2(long a, us *p)           { return vec_lvexhx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-6.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-6.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-6.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-6.c	2012-03-01 13:35:04.888039002 -0600
@@ -0,0 +1,40 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "lvexwx" 11 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsi  le3w(long a, void *p)          { return __builtin_altivec_lvexwx (a,p); }
+vsf  lew1(long a, sf *p)            { return __builtin_vec_lvexwx (a,p); }
+vsi  lew2(long a, si *p)            { return __builtin_vec_lvexwx (a,p); }
+vui  lew3(long a, ui *p)            { return __builtin_vec_lvexwx (a,p); }
+vsi  lew4(long a, sl *p)            { return __builtin_vec_lvexwx (a,p); }
+vui  lew5(long a, ul *p)            { return __builtin_vec_lvexwx (a,p); }
+vsf  Dlew1(long a, sf *p)           { return vec_lvexwx (a,p); }
+vsi  Dlew2(long a, si *p)           { return vec_lvexwx (a,p); }
+vui  Dlew3(long a, ui *p)           { return vec_lvexwx (a,p); }
+vsi  Dlew4(long a, sl *p)           { return vec_lvexwx (a,p); }
+vui  Dlew5(long a, ul *p)           { return vec_lvexwx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-7.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-7.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-7.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-7.c	2012-03-01 13:35:04.919039002 -0600
@@ -0,0 +1,42 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvexbx" 13 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void se1b(vsc v, long a, vsc *p)    { __builtin_altivec_stvexbx (v,a,p); }
+void seb1(vsc v, long a, sc *p)     { __builtin_vec_stvexbx (v,a,p); }
+void seb2(vuc v, long a, uc *p)     { __builtin_vec_stvexbx (v,a,p); }
+void seb3(vbc v, long a, sc *p)     { __builtin_vec_stvexbx (v,a,p); }
+void seb4(vbc v, long a, uc *p)     { __builtin_vec_stvexbx (v,a,p); }
+void seb5(vsc v, long a, void *p)   { __builtin_vec_stvexbx (v,a,p); }
+void seb6(vuc v, long a, void *p)   { __builtin_vec_stvexbx (v,a,p); }
+void Dseb1(vsc v, long a, sc *p)    { vec_stvexbx (v,a,p); }
+void Dseb2(vuc v, long a, uc *p)    { vec_stvexbx (v,a,p); }
+void Dseb3(vbc v, long a, sc *p)    { vec_stvexbx (v,a,p); }
+void Dseb4(vbc v, long a, uc *p)    { vec_stvexbx (v,a,p); }
+void Dseb5(vsc v, long a, void *p)  { vec_stvexbx (v,a,p); }
+void Dseb6(vuc v, long a, void *p)  { vec_stvexbx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-8.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-8.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-8.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-8.c	2012-03-01 13:35:04.947039002 -0600
@@ -0,0 +1,42 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvexhx" 13 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void se2h(vss v, long a, vss *p)    { __builtin_altivec_stvexhx (v,a,p); }
+void seh1(vss v, long a, ss *p)     { __builtin_vec_stvexhx (v,a,p); }
+void seh2(vus v, long a, us *p)     { __builtin_vec_stvexhx (v,a,p); }
+void seh3(vbs v, long a, ss *p)     { __builtin_vec_stvexhx (v,a,p); }
+void seh4(vbs v, long a, us *p)     { __builtin_vec_stvexhx (v,a,p); }
+void seh5(vss v, long a, void *p)   { __builtin_vec_stvexhx (v,a,p); }
+void seh6(vus v, long a, void *p)   { __builtin_vec_stvexhx (v,a,p); }
+void Dseh1(vss v, long a, ss *p)    { vec_stvexhx (v,a,p); }
+void Dseh2(vus v, long a, us *p)    { vec_stvexhx (v,a,p); }
+void Dseh3(vbs v, long a, ss *p)    { vec_stvexhx (v,a,p); }
+void Dseh4(vbs v, long a, us *p)    { vec_stvexhx (v,a,p); }
+void Dseh5(vss v, long a, void *p)  { vec_stvexhx (v,a,p); }
+void Dseh6(vus v, long a, void *p)  { vec_stvexhx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-9.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-9.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-9.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/altivec2_builtin-9.c	2012-03-01 13:35:04.967039002 -0600
@@ -0,0 +1,46 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -maltivec2" } */
+/* { dg-final { scan-assembler-times "stvexwx" 17 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void se3w(vsi v, long a, vsi *p)    { __builtin_altivec_stvexwx (v,a,p); }
+void sew1(vsf v, long a, sf *p)     { __builtin_vec_stvexwx (v,a,p); }
+void sew2(vsi v, long a, si *p)     { __builtin_vec_stvexwx (v,a,p); }
+void sew3(vui v, long a, ui *p)     { __builtin_vec_stvexwx (v,a,p); }
+void sew4(vbi v, long a, si *p)     { __builtin_vec_stvexwx (v,a,p); }
+void sew5(vbi v, long a, ui *p)     { __builtin_vec_stvexwx (v,a,p); }
+void sew6(vsf v, long a, void *p)   { __builtin_vec_stvexwx (v,a,p); }
+void sew7(vsi v, long a, void *p)   { __builtin_vec_stvexwx (v,a,p); }
+void sew8(vui v, long a, void *p)   { __builtin_vec_stvexwx (v,a,p); }
+void Dsew1(vsf v, long a, sf *p)    { vec_stvexwx (v,a,p); }
+void Dsew2(vsi v, long a, si *p)    { vec_stvexwx (v,a,p); }
+void Dsew3(vui v, long a, ui *p)    { vec_stvexwx (v,a,p); }
+void Dsew4(vbi v, long a, si *p)    { vec_stvexwx (v,a,p); }
+void Dsew5(vbi v, long a, ui *p)    { vec_stvexwx (v,a,p); }
+void Dsew6(vsf v, long a, void *p)  { vec_stvexwx (v,a,p); }
+void Dsew7(vsi v, long a, void *p)  { vec_stvexwx (v,a,p); }
+void Dsew8(vui v, long a, void *p)  { vec_stvexwx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c	2012-03-01 13:47:03.397039000 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvlx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc1(long a, void *p)           { return __builtin_altivec_lvlx (a,p); }
+vsf  llx01(long a, vsf *p)          { return __builtin_vec_lvlx (a,p); }
+vsf  llx02(long a, sf *p)           { return __builtin_vec_lvlx (a,p); }
+vbi  llx03(long a, vbi *p)          { return __builtin_vec_lvlx (a,p); }
+vsi  llx04(long a, vsi *p)          { return __builtin_vec_lvlx (a,p); }
+vsi  llx05(long a, si *p)           { return __builtin_vec_lvlx (a,p); }
+vui  llx06(long a, vui *p)          { return __builtin_vec_lvlx (a,p); }
+vui  llx07(long a, ui *p)           { return __builtin_vec_lvlx (a,p); }
+vbs  llx08(long a, vbs *p)          { return __builtin_vec_lvlx (a,p); }
+vp   llx09(long a, vp *p)           { return __builtin_vec_lvlx (a,p); }
+vss  llx10(long a, vss *p)          { return __builtin_vec_lvlx (a,p); }
+vss  llx11(long a, ss *p)           { return __builtin_vec_lvlx (a,p); }
+vus  llx12(long a, vus *p)          { return __builtin_vec_lvlx (a,p); }
+vus  llx13(long a, us *p)           { return __builtin_vec_lvlx (a,p); }
+vbc  llx14(long a, vbc *p)          { return __builtin_vec_lvlx (a,p); }
+vsc  llx15(long a, vsc *p)          { return __builtin_vec_lvlx (a,p); }
+vsc  llx16(long a, sc *p)           { return __builtin_vec_lvlx (a,p); }
+vuc  llx17(long a, vuc *p)          { return __builtin_vec_lvlx (a,p); }
+vuc  llx18(long a, uc *p)           { return __builtin_vec_lvlx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c	2012-03-01 13:47:03.427038997 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvlxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc2(long a, void *p)           { return __builtin_altivec_lvlxl (a,p); }
+vsf  llxl01(long a, vsf *p)         { return __builtin_vec_lvlxl (a,p); }
+vsf  llxl02(long a, sf *p)          { return __builtin_vec_lvlxl (a,p); }
+vbi  llxl03(long a, vbi *p)         { return __builtin_vec_lvlxl (a,p); }
+vsi  llxl04(long a, vsi *p)         { return __builtin_vec_lvlxl (a,p); }
+vsi  llxl05(long a, si *p)          { return __builtin_vec_lvlxl (a,p); }
+vui  llxl06(long a, vui *p)         { return __builtin_vec_lvlxl (a,p); }
+vui  llxl07(long a, ui *p)          { return __builtin_vec_lvlxl (a,p); }
+vbs  llxl08(long a, vbs *p)         { return __builtin_vec_lvlxl (a,p); }
+vp   llxl09(long a, vp *p)          { return __builtin_vec_lvlxl (a,p); }
+vss  llxl10(long a, vss *p)         { return __builtin_vec_lvlxl (a,p); }
+vss  llxl11(long a, ss *p)          { return __builtin_vec_lvlxl (a,p); }
+vus  llxl12(long a, vus *p)         { return __builtin_vec_lvlxl (a,p); }
+vus  llxl13(long a, us *p)          { return __builtin_vec_lvlxl (a,p); }
+vbc  llxl14(long a, vbc *p)         { return __builtin_vec_lvlxl (a,p); }
+vsc  llxl15(long a, vsc *p)         { return __builtin_vec_lvlxl (a,p); }
+vsc  llxl16(long a, sc *p)          { return __builtin_vec_lvlxl (a,p); }
+vuc  llxl17(long a, vuc *p)         { return __builtin_vec_lvlxl (a,p); }
+vuc  llxl18(long a, uc *p)          { return __builtin_vec_lvlxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c	2012-03-01 13:47:03.457038951 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvrx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc3(long a, void *p)           { return __builtin_altivec_lvrx (a,p); }
+vsf  lrx01(long a, vsf *p)          { return __builtin_vec_lvrx (a,p); }
+vsf  lrx02(long a, sf *p)           { return __builtin_vec_lvrx (a,p); }
+vbi  lrx03(long a, vbi *p)          { return __builtin_vec_lvrx (a,p); }
+vsi  lrx04(long a, vsi *p)          { return __builtin_vec_lvrx (a,p); }
+vsi  lrx05(long a, si *p)           { return __builtin_vec_lvrx (a,p); }
+vui  lrx06(long a, vui *p)          { return __builtin_vec_lvrx (a,p); }
+vui  lrx07(long a, ui *p)           { return __builtin_vec_lvrx (a,p); }
+vbs  lrx08(long a, vbs *p)          { return __builtin_vec_lvrx (a,p); }
+vp   lrx09(long a, vp *p)           { return __builtin_vec_lvrx (a,p); }
+vss  lrx10(long a, vss *p)          { return __builtin_vec_lvrx (a,p); }
+vss  lrx11(long a, ss *p)           { return __builtin_vec_lvrx (a,p); }
+vus  lrx12(long a, vus *p)          { return __builtin_vec_lvrx (a,p); }
+vus  lrx13(long a, us *p)           { return __builtin_vec_lvrx (a,p); }
+vbc  lrx14(long a, vbc *p)          { return __builtin_vec_lvrx (a,p); }
+vsc  lrx15(long a, vsc *p)          { return __builtin_vec_lvrx (a,p); }
+vsc  lrx16(long a, sc *p)           { return __builtin_vec_lvrx (a,p); }
+vuc  lrx17(long a, vuc *p)          { return __builtin_vec_lvrx (a,p); }
+vuc  lrx18(long a, uc *p)           { return __builtin_vec_lvrx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c	2012-03-01 13:47:03.487039003 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvrxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc4(long a, void *p)           { return __builtin_altivec_lvrxl (a,p); }
+vsf  lrxl01(long a, vsf *p)         { return __builtin_vec_lvrxl (a,p); }
+vsf  lrxl02(long a, sf *p)          { return __builtin_vec_lvrxl (a,p); }
+vbi  lrxl03(long a, vbi *p)         { return __builtin_vec_lvrxl (a,p); }
+vsi  lrxl04(long a, vsi *p)         { return __builtin_vec_lvrxl (a,p); }
+vsi  lrxl05(long a, si *p)          { return __builtin_vec_lvrxl (a,p); }
+vui  lrxl06(long a, vui *p)         { return __builtin_vec_lvrxl (a,p); }
+vui  lrxl07(long a, ui *p)          { return __builtin_vec_lvrxl (a,p); }
+vbs  lrxl08(long a, vbs *p)         { return __builtin_vec_lvrxl (a,p); }
+vp   lrxl09(long a, vp *p)          { return __builtin_vec_lvrxl (a,p); }
+vss  lrxl10(long a, vss *p)         { return __builtin_vec_lvrxl (a,p); }
+vss  lrxl11(long a, ss *p)          { return __builtin_vec_lvrxl (a,p); }
+vus  lrxl12(long a, vus *p)         { return __builtin_vec_lvrxl (a,p); }
+vus  lrxl13(long a, us *p)          { return __builtin_vec_lvrxl (a,p); }
+vbc  lrxl14(long a, vbc *p)         { return __builtin_vec_lvrxl (a,p); }
+vsc  lrxl15(long a, vsc *p)         { return __builtin_vec_lvrxl (a,p); }
+vsc  lrxl16(long a, sc *p)          { return __builtin_vec_lvrxl (a,p); }
+vuc  lrxl17(long a, vuc *p)         { return __builtin_vec_lvrxl (a,p); }
+vuc  lrxl18(long a, uc *p)          { return __builtin_vec_lvrxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c	2012-03-01 13:47:03.517038996 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvlx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc1(vsc v, long a, void *p)    { __builtin_altivec_stvlx (v,a,p); }
+void slx01(vsf v, long a, vsf *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx02(vsf v, long a, sf *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx03(vbi v, long a, vbi *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx04(vsi v, long a, vsi *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx05(vsi v, long a, si *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx06(vui v, long a, vui *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx07(vui v, long a, ui *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx08(vbs v, long a, vbs *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx09(vp v, long a, vp *p)     { __builtin_vec_stvlx (v,a,p); }
+void slx10(vss v, long a, vss *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx11(vss v, long a, ss *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx12(vus v, long a, vus *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx13(vus v, long a, us *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx14(vbc v, long a, vbc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx15(vsc v, long a, vsc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx16(vsc v, long a, sc *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx17(vuc v, long a, vuc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx18(vuc v, long a, uc *p)    { __builtin_vec_stvlx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c	2012-03-01 13:47:03.546038988 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvlxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc2(vsc v, long a, void *p)    { __builtin_altivec_stvlxl (v,a,p); }
+void slxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl02(vsf v, long a, sf *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl05(vsi v, long a, si *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl06(vui v, long a, vui *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl07(vui v, long a, ui *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl09(vp v, long a, vp *p)    { __builtin_vec_stvlxl (v,a,p); }
+void slxl10(vss v, long a, vss *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl11(vss v, long a, ss *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl12(vus v, long a, vus *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl13(vus v, long a, us *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl16(vsc v, long a, sc *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl18(vuc v, long a, uc *p)   { __builtin_vec_stvlxl (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c	2012-03-01 13:47:03.577039037 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvrx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc3(vsc v, long a, void *p)    { __builtin_altivec_stvrx (v,a,p); }
+void srx01(vsf v, long a, vsf *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx02(vsf v, long a, sf *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx03(vbi v, long a, vbi *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx04(vsi v, long a, vsi *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx05(vsi v, long a, si *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx06(vui v, long a, vui *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx07(vui v, long a, ui *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx08(vbs v, long a, vbs *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx09(vp v, long a, vp *p)     { __builtin_vec_stvrx (v,a,p); }
+void srx10(vss v, long a, vss *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx11(vss v, long a, ss *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx12(vus v, long a, vus *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx13(vus v, long a, us *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx14(vbc v, long a, vbc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx15(vsc v, long a, vsc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx16(vsc v, long a, sc *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx17(vuc v, long a, vuc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx18(vuc v, long a, uc *p)    { __builtin_vec_stvrx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c	2012-03-01 13:47:03.607038950 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvrxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc4(vsc v, long a, void *p)    { __builtin_altivec_stvrxl (v,a,p); }
+void srxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl02(vsf v, long a, sf *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl05(vsi v, long a, si *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl06(vui v, long a, vui *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl07(vui v, long a, ui *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl09(vp v, long a, vp *p)    { __builtin_vec_stvrxl (v,a,p); }
+void srxl10(vss v, long a, vss *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl11(vss v, long a, ss *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl12(vus v, long a, vus *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl13(vus v, long a, us *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl16(vsc v, long a, sc *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl18(vuc v, long a, uc *p)   { __builtin_vec_stvrxl (v,a,p); }

[-- Attachment #3: sub_e6500-gcc-Changelog --]
[-- Type: text/plain, Size: 4519 bytes --]

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* config.gcc (cpu_is_64bit): Add new cores e5500, e6500.
	(powerpc*-*-*): Add new cores e5500, e6500.
	* config/rs6000/rs6000-cpus.def: Add cpu definitions for e5500 and
	e6500.
	* config/rs6000/e5500.md: New file.
	* config/rs6000/e6500.md: New file.
	* config/rs6000/rs6000.opt: Add new option for altivec2.
	* config/rs6000/rs6000-opts.h (processor_type): Add
	PROCESSOR_PPCE5500 and PROCESSOR_PPCE6500.
	* config/rs6000/rs6000.h (ASM_CPU_SPEC): Add e5500 and e6500.
	(TARGET_LFIWAX): Exclude e5500 and e6500.
	(TARGET_LFIWZX): Ditto.
	(TARGET_FCFIDS): Re-maps to TARGET_LFIWZX.
	(TARGET_FCFIDU): Ditto.
	(TARGET_FCFIDUS): Ditto.
	(TARGET_FCTIDUZ): Ditto.
	(TARGET_FCTIWUZ): Ditto.
	(TARGET_FRE): Exclude e5500 and e6500.
	(TARGET_FRSQRTES): Ditto.
	(RS6000_BTM_ALTIVEC2): New.
	(RS6000_BTM_COMMON): Add RS6000_BTM_ALTIVEC2.
	* config/rs6000/rs6000.md (define_attr "type"): New type popcnt.
	(define_attr "cpu"): Add ppce5500 and ppce6500.
	Include e5500.md and e6500.md.
	(popcntb<mode>2): Add attribute type popcnt.
	(popcntd<mode>2): Ditto.
	(copysign<mode>3): Re-maps to TARGET_LFIWAX.
	(copysign<mode>3_fcpsgn): Ditto.
	* config/rs6000/rs6000.c (processor_costs): Add new costs for
	e5500 and e6500.
	(POWERPC_MASKS): Add new mask for altivec2.
	(rs6000_builtin_mask_calculate): Add new builtin mask for
	altivec2.
	(rs6000_option_override_internal): Altivec and SPE options not
	allowed with e5500. SPE options not allowed with e6500. Increase
	move inline limit for e5500 and e6500. Disable fsqrt instructions
	for e5500 and e6500. Disable mfocrf instruction for e5500. Disable
	string instructions for e5500 and e6500. Enable branch target
	alignment for e5500 and e6500. Initialize rs6000_cost for e5500
	and e6500.
	(altivec_expand_builtin): Add store vector expansion cases for
	stvexbx, stvexhx, stvexwx, stvflx, stvflxl, stvfrx, stvfrxl,
	stvswx, stvswxl. Add load vector expansion cases for lvexbx,
	lvexhx, lvexwx, lvtlx, lvtlxl, lvtrx, lvtrxl, lvswx, lvswxl, lvsm.
	(altivec_init_builtins): Add builtin and override builtin
	definitions for lvexbx, lvexhx, lvexwx, stvexbx, stvexhx, stvexwx,
	lvtlx, lvtlxl, lvtrx, lvtrxl, stvflx, stvflxl, stvfrx, stvfrxl,
	lvswx, lvswxl, lvsm, stvswx, stvswxl.
	(builtin_function_type): Set unsigned type flags for vabsdub,
	vabsduh, vabsduw.
	(rs6000_adjust_cost): Add extra scheduling cycles between compare
	and branch for e5500 and e6500.
	(rs6000_issue_rate): Set issue rate for e5500 and e6500.
	(rs6000_builtin_mask_names): Add entry for altivec2 mask.
	* config/rs6000/altivec.md (unspec): New unspecs: UNSPEC_LVEX,
	UNSPEC_STVEX, UNSPEC_LVTLX, UNSPEC_LVTLXL, UNSPEC_LVTRX,
	UNSPEC_LVTRXL, UNSPEC_STVFLX, UNSPEC_STVFLXL, UNSPEC_STVFRX,
	UNSPEC_STVFRXL, UNSPEC_LVSWX, UNSPEC_LVSWXL, UNSPEC_STVSWX,
	UNSPEC_STVSWXL, UNSPEC_LVSM, UNSPEC_VABSDUB, UNSPEC_VABSDUH,
	UNSPEC_VABSDUW.
	(altivec_vabsduw): New altivec2 insn. Use new unspec.
	(altivec_vabsduh): Ditto.
	(altivec_vabsdub): Ditto.
	(altivec_lvex<VI_char>x): Ditto.
	(altivec_stvex<VI_char>x): Ditto.
	(altivec_lvtlx): Ditto.
	(altivec_lvtlxl): Ditto.
	(altivec_lvtrx): Ditto.
	(altivec_lvtrxl): Ditto.
	(altivec_stvflx): Ditto.
	(altivec_stvflxl): Ditto.
	(altivec_stvfrx): Ditto.
	(altivec_stvfrxl): Ditto.
	(altivec_lvswx): Ditto.
	(altivec_lvswxl): Ditto.
	(altivec_lvsm): Ditto.
	(altivec_stvswx): Ditto.
	(altivec_stvswxl): Ditto.
	(altivec_stvlx): Change machine mode of operands.
	(altivec_stvlxl): Ditto.
	(altivec_stvrx): Ditto.
	(altivec_stvrxl): Ditto.
	* config/rs6000/altivec.h: New altivec2 synonyms.
	* config/rs6000/rs6000-builtin.def: New altivec2 convenience
	macros. Use macros to enter new altivec2 builtin definitions and
	overload builtin definitions for VABSDUB, VABSDUH, VABSDUW,
	LVEXBX, LVEXHX, LVEXWX, LVTLX, LVTLXL, LVTRX, LVTRXL, LVSWX,
	LVSWXL, LVSM, STVEXBX, STVEXHX, STVEXWX, STVFLX, STVFLXL, STVFRX,
	STVFRXL, STVSWX, STVSWXL.
	* config/rs6000/rs6000-c.c (rs6000_target_modify_macros): New
	macro definition for altivec2.
	(altivec_overloaded_builtins): New entries for altivec2 overloads.
	* doc/extend.texi: Document altivec2 builtins.
	* doc/invoke.texi: Add altivec2 PowerPC option.
	(mpopcntb): Document e5500, e6500 implementations.
	(mpopcntd): Document floating point conversion instruction and e5500,
	e6500 implementations.
	(mcmpb): Document copy sign instruction and e5500, e6500
	implementations.
	(item -mcpu): Add e5500 and e6500 to list of cpus. Document
	altivec2 PowerPC option.
	(item -maltivec2): New.

[-- Attachment #4: sub_e6500-gcc-Changelog-testsuite --]
[-- Type: text/plain, Size: 1628 bytes --]

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* gcc.dg/tree-ssa/vector-3.c: Adjust regular expression.
	* gcc.target/powerpc/altivec2_builtin-1.c: New test case.
	* gcc.target/powerpc/altivec2_builtin-2.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-3.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-4.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-5.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-6.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-7.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-8.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-9.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-10.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-11.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-12.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-13.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-14.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-15.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-16.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-17.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-18.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-19.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-20.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-21.c: Ditto.
	* gcc.target/powerpc/altivec2_builtin-22.c: Ditto.
	* gcc.target/powerpc/cell_builtin-1.c: Ditto.
	* gcc.target/powerpc/cell_builtin-2.c: Ditto.
	* gcc.target/powerpc/cell_builtin-3.c: Ditto.
	* gcc.target/powerpc/cell_builtin-4.c: Ditto.
	* gcc.target/powerpc/cell_builtin-5.c: Ditto.
	* gcc.target/powerpc/cell_builtin-6.c: Ditto.
	* gcc.target/powerpc/cell_builtin-7.c: Ditto.
	* gcc.target/powerpc/cell_builtin-8.c: Ditto.

[-- Attachment #5: sub_cell-gcc.diff --]
[-- Type: text/x-patch, Size: 23130 bytes --]

diff -ruN gcc-20120223-orig/gcc/config/rs6000/altivec.md gcc-20120223/gcc/config/rs6000/altivec.md
--- gcc-20120223-orig/gcc/config/rs6000/altivec.md	2012-02-23 16:31:51.000000000 -0600
+++ gcc-20120223/gcc/config/rs6000/altivec.md	2012-02-24 13:18:06.612039003 -0600
@@ -2318,8 +2380,8 @@
 
 (define_insn "altivec_stvlx"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVLX)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvlx %1,%y0"
@@ -2327,8 +2389,8 @@
 
 (define_insn "altivec_stvlxl"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVLXL)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvlxl %1,%y0"
@@ -2336,8 +2398,8 @@
 
 (define_insn "altivec_stvrx"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVRX)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvrx %1,%y0"
@@ -2345,8 +2407,8 @@
 
 (define_insn "altivec_stvrxl"
   [(parallel
-    [(set (match_operand:V4SI 0 "memory_operand" "=Z")
-	  (match_operand:V4SI 1 "register_operand" "v"))
+    [(set (match_operand:V16QI 0 "memory_operand" "=Z")
+	  (match_operand:V16QI 1 "register_operand" "v"))
      (unspec [(const_int 0)] UNSPEC_STVRXL)])]
   "TARGET_ALTIVEC && rs6000_cpu == PROCESSOR_CELL"
   "stvrxl %1,%y0"
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-1.c	2012-03-01 13:47:03.397039000 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvlx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc1(long a, void *p)           { return __builtin_altivec_lvlx (a,p); }
+vsf  llx01(long a, vsf *p)          { return __builtin_vec_lvlx (a,p); }
+vsf  llx02(long a, sf *p)           { return __builtin_vec_lvlx (a,p); }
+vbi  llx03(long a, vbi *p)          { return __builtin_vec_lvlx (a,p); }
+vsi  llx04(long a, vsi *p)          { return __builtin_vec_lvlx (a,p); }
+vsi  llx05(long a, si *p)           { return __builtin_vec_lvlx (a,p); }
+vui  llx06(long a, vui *p)          { return __builtin_vec_lvlx (a,p); }
+vui  llx07(long a, ui *p)           { return __builtin_vec_lvlx (a,p); }
+vbs  llx08(long a, vbs *p)          { return __builtin_vec_lvlx (a,p); }
+vp   llx09(long a, vp *p)           { return __builtin_vec_lvlx (a,p); }
+vss  llx10(long a, vss *p)          { return __builtin_vec_lvlx (a,p); }
+vss  llx11(long a, ss *p)           { return __builtin_vec_lvlx (a,p); }
+vus  llx12(long a, vus *p)          { return __builtin_vec_lvlx (a,p); }
+vus  llx13(long a, us *p)           { return __builtin_vec_lvlx (a,p); }
+vbc  llx14(long a, vbc *p)          { return __builtin_vec_lvlx (a,p); }
+vsc  llx15(long a, vsc *p)          { return __builtin_vec_lvlx (a,p); }
+vsc  llx16(long a, sc *p)           { return __builtin_vec_lvlx (a,p); }
+vuc  llx17(long a, vuc *p)          { return __builtin_vec_lvlx (a,p); }
+vuc  llx18(long a, uc *p)           { return __builtin_vec_lvlx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-2.c	2012-03-01 13:47:03.427038997 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvlxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc2(long a, void *p)           { return __builtin_altivec_lvlxl (a,p); }
+vsf  llxl01(long a, vsf *p)         { return __builtin_vec_lvlxl (a,p); }
+vsf  llxl02(long a, sf *p)          { return __builtin_vec_lvlxl (a,p); }
+vbi  llxl03(long a, vbi *p)         { return __builtin_vec_lvlxl (a,p); }
+vsi  llxl04(long a, vsi *p)         { return __builtin_vec_lvlxl (a,p); }
+vsi  llxl05(long a, si *p)          { return __builtin_vec_lvlxl (a,p); }
+vui  llxl06(long a, vui *p)         { return __builtin_vec_lvlxl (a,p); }
+vui  llxl07(long a, ui *p)          { return __builtin_vec_lvlxl (a,p); }
+vbs  llxl08(long a, vbs *p)         { return __builtin_vec_lvlxl (a,p); }
+vp   llxl09(long a, vp *p)          { return __builtin_vec_lvlxl (a,p); }
+vss  llxl10(long a, vss *p)         { return __builtin_vec_lvlxl (a,p); }
+vss  llxl11(long a, ss *p)          { return __builtin_vec_lvlxl (a,p); }
+vus  llxl12(long a, vus *p)         { return __builtin_vec_lvlxl (a,p); }
+vus  llxl13(long a, us *p)          { return __builtin_vec_lvlxl (a,p); }
+vbc  llxl14(long a, vbc *p)         { return __builtin_vec_lvlxl (a,p); }
+vsc  llxl15(long a, vsc *p)         { return __builtin_vec_lvlxl (a,p); }
+vsc  llxl16(long a, sc *p)          { return __builtin_vec_lvlxl (a,p); }
+vuc  llxl17(long a, vuc *p)         { return __builtin_vec_lvlxl (a,p); }
+vuc  llxl18(long a, uc *p)          { return __builtin_vec_lvlxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-3.c	2012-03-01 13:47:03.457038951 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvrx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc3(long a, void *p)           { return __builtin_altivec_lvrx (a,p); }
+vsf  lrx01(long a, vsf *p)          { return __builtin_vec_lvrx (a,p); }
+vsf  lrx02(long a, sf *p)           { return __builtin_vec_lvrx (a,p); }
+vbi  lrx03(long a, vbi *p)          { return __builtin_vec_lvrx (a,p); }
+vsi  lrx04(long a, vsi *p)          { return __builtin_vec_lvrx (a,p); }
+vsi  lrx05(long a, si *p)           { return __builtin_vec_lvrx (a,p); }
+vui  lrx06(long a, vui *p)          { return __builtin_vec_lvrx (a,p); }
+vui  lrx07(long a, ui *p)           { return __builtin_vec_lvrx (a,p); }
+vbs  lrx08(long a, vbs *p)          { return __builtin_vec_lvrx (a,p); }
+vp   lrx09(long a, vp *p)           { return __builtin_vec_lvrx (a,p); }
+vss  lrx10(long a, vss *p)          { return __builtin_vec_lvrx (a,p); }
+vss  lrx11(long a, ss *p)           { return __builtin_vec_lvrx (a,p); }
+vus  lrx12(long a, vus *p)          { return __builtin_vec_lvrx (a,p); }
+vus  lrx13(long a, us *p)           { return __builtin_vec_lvrx (a,p); }
+vbc  lrx14(long a, vbc *p)          { return __builtin_vec_lvrx (a,p); }
+vsc  lrx15(long a, vsc *p)          { return __builtin_vec_lvrx (a,p); }
+vsc  lrx16(long a, sc *p)           { return __builtin_vec_lvrx (a,p); }
+vuc  lrx17(long a, vuc *p)          { return __builtin_vec_lvrx (a,p); }
+vuc  lrx18(long a, uc *p)           { return __builtin_vec_lvrx (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-4.c	2012-03-01 13:47:03.487039003 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "lvrxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+vsc  lc4(long a, void *p)           { return __builtin_altivec_lvrxl (a,p); }
+vsf  lrxl01(long a, vsf *p)         { return __builtin_vec_lvrxl (a,p); }
+vsf  lrxl02(long a, sf *p)          { return __builtin_vec_lvrxl (a,p); }
+vbi  lrxl03(long a, vbi *p)         { return __builtin_vec_lvrxl (a,p); }
+vsi  lrxl04(long a, vsi *p)         { return __builtin_vec_lvrxl (a,p); }
+vsi  lrxl05(long a, si *p)          { return __builtin_vec_lvrxl (a,p); }
+vui  lrxl06(long a, vui *p)         { return __builtin_vec_lvrxl (a,p); }
+vui  lrxl07(long a, ui *p)          { return __builtin_vec_lvrxl (a,p); }
+vbs  lrxl08(long a, vbs *p)         { return __builtin_vec_lvrxl (a,p); }
+vp   lrxl09(long a, vp *p)          { return __builtin_vec_lvrxl (a,p); }
+vss  lrxl10(long a, vss *p)         { return __builtin_vec_lvrxl (a,p); }
+vss  lrxl11(long a, ss *p)          { return __builtin_vec_lvrxl (a,p); }
+vus  lrxl12(long a, vus *p)         { return __builtin_vec_lvrxl (a,p); }
+vus  lrxl13(long a, us *p)          { return __builtin_vec_lvrxl (a,p); }
+vbc  lrxl14(long a, vbc *p)         { return __builtin_vec_lvrxl (a,p); }
+vsc  lrxl15(long a, vsc *p)         { return __builtin_vec_lvrxl (a,p); }
+vsc  lrxl16(long a, sc *p)          { return __builtin_vec_lvrxl (a,p); }
+vuc  lrxl17(long a, vuc *p)         { return __builtin_vec_lvrxl (a,p); }
+vuc  lrxl18(long a, uc *p)          { return __builtin_vec_lvrxl (a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-5.c	2012-03-01 13:47:03.517038996 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvlx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc1(vsc v, long a, void *p)    { __builtin_altivec_stvlx (v,a,p); }
+void slx01(vsf v, long a, vsf *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx02(vsf v, long a, sf *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx03(vbi v, long a, vbi *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx04(vsi v, long a, vsi *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx05(vsi v, long a, si *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx06(vui v, long a, vui *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx07(vui v, long a, ui *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx08(vbs v, long a, vbs *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx09(vp v, long a, vp *p)     { __builtin_vec_stvlx (v,a,p); }
+void slx10(vss v, long a, vss *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx11(vss v, long a, ss *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx12(vus v, long a, vus *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx13(vus v, long a, us *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx14(vbc v, long a, vbc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx15(vsc v, long a, vsc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx16(vsc v, long a, sc *p)    { __builtin_vec_stvlx (v,a,p); }
+void slx17(vuc v, long a, vuc *p)   { __builtin_vec_stvlx (v,a,p); }
+void slx18(vuc v, long a, uc *p)    { __builtin_vec_stvlx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-6.c	2012-03-01 13:47:03.546038988 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvlxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc2(vsc v, long a, void *p)    { __builtin_altivec_stvlxl (v,a,p); }
+void slxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl02(vsf v, long a, sf *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl05(vsi v, long a, si *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl06(vui v, long a, vui *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl07(vui v, long a, ui *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl09(vp v, long a, vp *p)    { __builtin_vec_stvlxl (v,a,p); }
+void slxl10(vss v, long a, vss *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl11(vss v, long a, ss *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl12(vus v, long a, vus *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl13(vus v, long a, us *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl16(vsc v, long a, sc *p)   { __builtin_vec_stvlxl (v,a,p); }
+void slxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvlxl (v,a,p); }
+void slxl18(vuc v, long a, uc *p)   { __builtin_vec_stvlxl (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-7.c	2012-03-01 13:47:03.577039037 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvrx" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc3(vsc v, long a, void *p)    { __builtin_altivec_stvrx (v,a,p); }
+void srx01(vsf v, long a, vsf *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx02(vsf v, long a, sf *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx03(vbi v, long a, vbi *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx04(vsi v, long a, vsi *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx05(vsi v, long a, si *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx06(vui v, long a, vui *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx07(vui v, long a, ui *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx08(vbs v, long a, vbs *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx09(vp v, long a, vp *p)     { __builtin_vec_stvrx (v,a,p); }
+void srx10(vss v, long a, vss *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx11(vss v, long a, ss *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx12(vus v, long a, vus *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx13(vus v, long a, us *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx14(vbc v, long a, vbc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx15(vsc v, long a, vsc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx16(vsc v, long a, sc *p)    { __builtin_vec_stvrx (v,a,p); }
+void srx17(vuc v, long a, vuc *p)   { __builtin_vec_stvrx (v,a,p); }
+void srx18(vuc v, long a, uc *p)    { __builtin_vec_stvrx (v,a,p); }
diff -ruN gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c
--- gcc-20120223-orig/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c	1969-12-31 18:00:00.000000000 -0600
+++ gcc-20120223/gcc/testsuite/gcc.target/powerpc/cell_builtin-8.c	2012-03-01 13:47:03.607038950 -0600
@@ -0,0 +1,48 @@
+/* { dg-do compile { target { powerpc*-*-* } } } */
+/* { dg-skip-if "" { powerpc*-*-darwin* } { "*" } { "" } } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-O2 -maltivec -mcpu=cell" } */
+/* { dg-final { scan-assembler-times "stvrxl" 19 } } */
+
+#include <altivec.h>
+
+typedef __vector signed char vsc;
+typedef __vector signed short vss;
+typedef __vector signed int vsi;
+typedef __vector unsigned char vuc;
+typedef __vector unsigned short vus;
+typedef __vector unsigned int vui;
+typedef __vector bool char vbc;
+typedef __vector bool short vbs;
+typedef __vector bool int vbi;
+typedef __vector float vsf;
+typedef __vector pixel vp;
+typedef signed char sc;
+typedef signed short ss;
+typedef signed int si;
+typedef signed long sl;
+typedef unsigned char uc;
+typedef unsigned short us;
+typedef unsigned int ui;
+typedef unsigned long ul;
+typedef float sf;
+
+void sc4(vsc v, long a, void *p)    { __builtin_altivec_stvrxl (v,a,p); }
+void srxl01(vsf v, long a, vsf *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl02(vsf v, long a, sf *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl03(vbi v, long a, vbi *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl04(vsi v, long a, vsi *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl05(vsi v, long a, si *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl06(vui v, long a, vui *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl07(vui v, long a, ui *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl08(vbs v, long a, vbs *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl09(vp v, long a, vp *p)    { __builtin_vec_stvrxl (v,a,p); }
+void srxl10(vss v, long a, vss *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl11(vss v, long a, ss *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl12(vus v, long a, vus *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl13(vus v, long a, us *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl14(vbc v, long a, vbc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl15(vsc v, long a, vsc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl16(vsc v, long a, sc *p)   { __builtin_vec_stvrxl (v,a,p); }
+void srxl17(vuc v, long a, vuc *p)  { __builtin_vec_stvrxl (v,a,p); }
+void srxl18(vuc v, long a, uc *p)   { __builtin_vec_stvrxl (v,a,p); }

[-- Attachment #6: sub_cell-gcc-Changelog --]
[-- Type: text/plain, Size: 207 bytes --]

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* config/rs6000/altivec.md (altivec_stvlx): Change machine mode of
	operands.
	(altivec_stvlxl): Ditto.
	(altivec_stvrx): Ditto.
	(altivec_stvrxl): Ditto.

[-- Attachment #7: sub_cell-gcc-Changelog-testsuite --]
[-- Type: text/plain, Size: 427 bytes --]

2012-03-01  Edmar Wienskoski  <edmar@freescale.com>

	* gcc.target/powerpc/cell_builtin-1.c: New test.
	* gcc.target/powerpc/cell_builtin-2.c: Ditto.
	* gcc.target/powerpc/cell_builtin-3.c: Ditto.
	* gcc.target/powerpc/cell_builtin-4.c: Ditto.
	* gcc.target/powerpc/cell_builtin-5.c: Ditto.
	* gcc.target/powerpc/cell_builtin-6.c: Ditto.
	* gcc.target/powerpc/cell_builtin-7.c: Ditto.
	* gcc.target/powerpc/cell_builtin-8.c: Ditto.
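
For reference only (not part of the patch): a minimal sketch of the usual
Cell unaligned-load idiom built from the lvlx/lvrx builtins the new tests
exercise. The function name load_unaligned is purely illustrative, and the
snippet assumes a compile with -maltivec -mcpu=cell.

  #include <altivec.h>

  /* lvlx fetches the bytes from p up to the end of its 16-byte block
     (left-justified, rest zeroed); lvrx at p+16 fetches the remaining
     bytes from the following block (right-justified).  OR-ing the two
     halves yields the full unaligned 16-byte load.  */
  __vector unsigned char
  load_unaligned (unsigned char *p)
  {
    __vector unsigned char left  = __builtin_vec_lvlx (0, p);
    __vector unsigned char right = __builtin_vec_lvrx (16, p);
    return vec_or (left, right);
  }

When p happens to be 16-byte aligned, the lvrx half loads all zeros and the
OR is a no-op, so the same sequence works for both the aligned and the
unaligned case.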

Thread overview: 10+ messages
2012-03-06 17:45 [RFA] PowerPC e5500 and e6500 cores support Edmar
2012-05-17 22:16 ` Michael Meissner
2012-05-18 20:17   ` Edmar
2012-05-21 18:51     ` David Edelsohn
2012-05-23 15:20       ` Edmar
2012-05-23 18:33         ` David Edelsohn
2012-06-01 16:58       ` Edmar
2012-06-05  0:45         ` David Edelsohn
2012-06-05 20:24           ` Edmar
2012-06-06 13:52             ` David Edelsohn
